diff --git a/fpolicygradientsageneralframeworkforgoalconditionedrlusingfdivergences/b48b3244-119a-4fd6-bc8d-c5f30e313002_content_list.json b/fpolicygradientsageneralframeworkforgoalconditionedrlusingfdivergences/b48b3244-119a-4fd6-bc8d-c5f30e313002_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..5d6989ebef03f6a73f37230daeedef47cb9ebd98 --- /dev/null +++ b/fpolicygradientsageneralframeworkforgoalconditionedrlusingfdivergences/b48b3244-119a-4fd6-bc8d-c5f30e313002_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:6c8344c6b301a25ba30e72f92985d6e7f6f4189b85f2b6b49fb35d7713d7e9f2 +size 155023 diff --git a/fpolicygradientsageneralframeworkforgoalconditionedrlusingfdivergences/b48b3244-119a-4fd6-bc8d-c5f30e313002_model.json b/fpolicygradientsageneralframeworkforgoalconditionedrlusingfdivergences/b48b3244-119a-4fd6-bc8d-c5f30e313002_model.json new file mode 100644 index 0000000000000000000000000000000000000000..a44c037c697a2179b4bb3def3b0f80ad0c38b972 --- /dev/null +++ b/fpolicygradientsageneralframeworkforgoalconditionedrlusingfdivergences/b48b3244-119a-4fd6-bc8d-c5f30e313002_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:195324d6946bce2fe6c536bc8e571c70ef3057a082d02ed68fc9a570d8152d46 +size 181382 diff --git a/fpolicygradientsageneralframeworkforgoalconditionedrlusingfdivergences/b48b3244-119a-4fd6-bc8d-c5f30e313002_origin.pdf b/fpolicygradientsageneralframeworkforgoalconditionedrlusingfdivergences/b48b3244-119a-4fd6-bc8d-c5f30e313002_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..39125f6fe2b013a95b9d08c1854990299a066354 --- /dev/null +++ b/fpolicygradientsageneralframeworkforgoalconditionedrlusingfdivergences/b48b3244-119a-4fd6-bc8d-c5f30e313002_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c660440b833d38bfc3baab760a848facaceed7adb9526c2135213a66d0f8c06d +size 3672518 diff --git 
a/fpolicygradientsageneralframeworkforgoalconditionedrlusingfdivergences/full.md b/fpolicygradientsageneralframeworkforgoalconditionedrlusingfdivergences/full.md new file mode 100644 index 0000000000000000000000000000000000000000..87bd20fd924688d8d3837e8345e1f8388266808a --- /dev/null +++ b/fpolicygradientsageneralframeworkforgoalconditionedrlusingfdivergences/full.md @@ -0,0 +1,742 @@ +# $f$ -Policy Gradients: A General Framework for Goal Conditioned RL using $f$ -Divergences + +Siddhant Agarwal + +The University of Texas at Austin siddhant@cs.utexas.edu + +Peter Stone + +The University of Texas at Austin +Sony AI +pstone@cs.utexas.edu + +Ishan Durugkar + +Sony AI + +ishan.durugkar@sony.com + +Amy Zhang + +The University of Texas at Austin amy.zhang@austin.utexas.edu + +# Abstract + +Goal-Conditioned Reinforcement Learning (RL) problems often have access to sparse rewards where the agent receives a reward signal only when it has achieved the goal, making policy optimization a difficult problem. Several works augment this sparse reward with a learned dense reward function, but this can lead to sub-optimal policies if the reward is misaligned. Moreover, recent works have demonstrated that effective shaping rewards for a particular problem can depend on the underlying learning algorithm. This paper introduces a novel way to encourage exploration called $f$ -Policy Gradients, or $f$ -PG. $f$ -PG minimizes the f-divergence between the agent's state visitation distribution and the goal, which we show can lead to an optimal policy. We derive gradients for various f-divergences to optimize this objective. Our learning paradigm provides dense learning signals for exploration in sparse reward settings. We further introduce an entropy-regularized policy optimization objective, that we call state-MaxEnt RL (or $s$ -MaxEnt RL) as a special case of our objective. 
We show that several metric-based shaping rewards like L2 can be used with $s$ -MaxEnt RL, providing a common ground to study such metric-based shaping rewards with efficient exploration. We find that $f$ -PG has better performance compared to standard policy gradient methods on a challenging gridworld as well as the Point Maze and FetchReach environments. More information can be found on our website https://agarwalsiddhant10.github.io/projects/fpg.html.

# 1 Introduction

Reinforcement Learning (RL) algorithms aim to identify the optimal behavior (policy) for solving a task by interacting with the environment. The field of RL has made large strides in recent years (Mnih et al., 2013; Silver et al., 2017; Haarnoja et al., 2018; Ouyang et al., 2022; Wurman et al., 2022) and has been applied to complex tasks ranging from robotics (Gupta et al., 2019) and protein synthesis (Jumper et al., 2021) to computer architecture (Fawzi et al., 2022) and finance (Liu et al., 2021). Goal-Conditioned RL (GCRL) is a generalized form of the standard RL paradigm for learning a policy that can solve many tasks, as long as each task can be defined by a single rewarding goal state. Common examples of goal-conditioned tasks arise in robotics where the goal states can be a target object configuration for manipulation-based tasks (Kim et al., 2022; Gupta et al., 2019; OpenAI et al., 2021) or a target location for navigation-based tasks (Shah et al., 2020; Gervet et al., 2023).

In any reinforcement learning setup, the task is conveyed to the agent using rewards (Silver et al., 2021). In goal-conditioned RL settings, a common reward function used is 1 when the goal is

achieved and 0 everywhere else. This reward function is sparse and poses a significant challenge: the agent must obtain the optimal policy without any intermediate learning signal.
Prior works (Ng et al., 1999; Ni et al., 2020; Durugkar et al., 2021; Arjona-Medina et al., 2019; Goyal et al., 2019) have augmented the reward function to provide some dense signal for policy optimization. A major issue with augmenting reward functions is that the optimal policy for the new reward function may no longer be optimal under the original, true reward function (Ng et al., 1999). Moreover, it has been shown (Booth et al., 2023) that shaping rewards that improve learning for one learning algorithm may not be optimal for another learning algorithm. Algorithms that learn reward functions (Ni et al., 2020; Durugkar et al., 2021; Zheng et al., 2018) are inefficient because the reward function must first be learned before it can be used for policy optimization. These challenges lead to the following research question: Is there another way to provide dense learning signals for policy optimization other than through dense shaping rewards?

In this work, we look at using divergence minimization between the agent's state visitation and the goal distribution (we assume that each goal can be represented as a distribution, a Dirac distribution being the simplest) as an objective to provide additional learning signals. Similar perspectives on policy learning have been explored in prior works (Ziebart et al., 2008; Haarnoja et al., 2017, 2018; Ho & Ermon, 2016; Ni et al., 2020; Ghasemipour et al., 2019; Fu et al., 2017), but they reduce their methods to a reward-centric view. MaxEnt RL methods (Ziebart et al., 2008; Haarnoja et al., 2017, 2018) use the distribution over trajectories rather than state visitations and still suffer from sparsity if the task rewards are sparse. Imitation learning works like those of Ho & Ermon (2016); Fu et al. (2017); Ghasemipour et al. (2019) use a variational lower bound to obtain min-max objectives that require discriminators.
These objectives suffer from mathematical instabilities and often require coverage assumptions, i.e., abundant overlap between the agent's state visitation distribution and the goal distribution. Our method neither relies on discriminators nor assumes state coverage. It provides dense signals to update the policy even when the agent has not seen the goal. These signals push the policy towards higher-entropy state visitations until the goal is discovered.

Our method, $f$ -PG or $f$ -Policy Gradient, introduces a novel GCRL framework that aims to minimize a general measure of mismatch (the $f$ -divergence) between the agent's state visitation distribution and the goal distribution. We prove that minimizing the $f$ -divergence (for some divergences) recovers the optimal policy. The analytical gradient for the objective looks very similar to a policy gradient, which allows us to leverage established methods from the policy gradient literature to come up with an efficient algorithm for goal-conditioned RL. We show the connection of our method to the commonly used metric-based shaping rewards for GCRL like L2 rewards. We show that a special case of $f$ -PG jointly optimizes for maximization of a reward and the entropy of the state-visitation distribution, thus introducing state-MaxEnt RL (or s-MaxEnt RL). Using a sparse gridworld, we establish the benefits of using $f$ -PG as a dense signal to explore when the agent has not seen the goal. We also demonstrate that our framework can be extended to continuous state spaces and scale to larger and higher-dimensional state spaces in maze navigation and manipulation tasks.
+

Our key contributions are 1) developing a novel algorithm for goal-conditioned RL that provably produces the optimal policy, 2) connecting our framework to commonly known metric-based shaping rewards, 3) providing a new perspective to RL (s-MaxEnt RL) that focuses on maximizing the entropy of the state-visitation distribution and 4) empirical evidence demonstrating its ability to provide dense learning signals and scale to larger domains.

# 2 Background

This section goes over the standard goal-conditioned reinforcement learning formulation and the f-divergences that will be used in the rest of the paper.

Goal-conditioned reinforcement learning. This paper considers an agent in a goal-conditioned MDP (Puterman, 1990; Kaelbling, 1993). A goal-conditioned MDP is defined as a tuple $\langle S, \mathcal{G}, \mathcal{A}, P, r, \gamma, \mu_0, \rho_g \rangle$ where $S$ is the state space, $\mathcal{A}$ is the action space, $P: S \times \mathcal{A} \longmapsto \Delta(S)$ is the transition probability ( $\Delta(\cdot)$ denotes a probability distribution over a set), $\gamma \in [0,1)$ is the discount factor, $\mu_0$ is the distribution over initial states, $\mathcal{G} \subset S$ is the set of goals, and $\rho_g \in \Delta(\mathcal{G})$ is the distribution over goals. At the beginning of an episode, the initial state $s_0$ and the goal $g$ are sampled from the distributions $\mu_0$ and $\rho_g$ . The rewards $r: S \times \mathcal{G} \longmapsto \mathbb{R}$ are based on the state the agent visits and conditioned on the goal specified during that episode. This work focuses on sparse rewards, where $r(s', g) = 1$ when $s' = g$ , and $r(s', g) = 0$ otherwise. In continuous domains, the equality is relaxed to $s' \in \mathcal{B}(g, r)$ where $\mathcal{B}(g, r)$ represents a ball around the goal $g$ with radius $r$ .

A trajectory $\tau$ is defined as the sequence $(s_0, a_0, s_1, \ldots, s_{T-1}, a_{T-1}, s_T)$ .
The return $H_g(s)$ is defined as the cumulative undiscounted reward $H_g(s) := \sum_{t=0}^{T-1} [r(s_{t+1}, g) \mid s_0 = s]$, where $T$ is the length of a trajectory. We will assume the trajectory ends when a maximum number of policy steps $(T)$ have been executed. The agent aims to learn a policy $\pi: \mathcal{S} \times \mathcal{G} \longmapsto \Delta(\mathcal{A})$ that maximizes the expected return $\mathbb{E}_{\pi, s_0}[H_g(s_0)]$. The optimal policy is $\pi^* = \arg \max_{\pi_\theta \in \Pi} \mathbb{E}_{\pi_\theta, s_0}[H_g(s_0)]$, where the space of policies $\Pi$ is defined by a set of parameters $\theta \in \Theta$.

Distribution matching approach to goal-conditioned RL. The distribution over goal-conditioned trajectories is defined as $p_{\theta}(\tau; g) = \mu_0(s_0) \prod_{t=0}^{T-1} P(s_{t+1} \mid s_t, a_t)\, \pi_{\theta}(a_t \mid s_t; g)$. The trajectory-dependent state visitation frequency $\eta_{\tau}(s)$ is the number of times the state $s$ is visited in the trajectory $\tau$. The agent's goal-conditioned state visitation distribution can then be defined as:

$$
p_{\theta}(s; g) = \frac{\int p_{\theta}(\tau; g)\, \eta_{\tau}(s)\, d\tau}{Z} \tag{1}
$$

$$
= \frac{\int \prod_{t} P(s_{t+1} \mid s_t, a_t)\, \pi_{\theta}(a_t \mid s_t; g)\, \eta_{\tau}(s)\, d\tau}{\int \int \prod_{t} P(s_{t+1} \mid s_t, a_t)\, \pi_{\theta}(a_t \mid s_t; g)\, \eta_{\tau}(s)\, d\tau\, ds}. \tag{2}
$$

The goal $g$ defines an idealized target distribution $p_{g} \in \Delta(S)$, considered here as a Dirac distribution which places all the probability mass at the goal state, $p_{g} = \delta(g)$. Such a formulation has been used previously in approaches to learn goal-conditioned policies (Durugkar et al., 2021). This work focuses on minimizing the mismatch of an agent's goal-conditioned state visitation distribution $p_{\theta}(s; g)$ to this target distribution $p_{g}$.
In this paper, we will be using $p_{\theta}$ and $p_{\pi}$ interchangeably, i.e., $p_{\theta}$ corresponds to the visitation distribution induced by the policy $\pi$ that is parameterized by $\theta$.

To do so, this paper considers a family of measures, called $f$ -divergences, that compare the state-visitation distribution induced by a goal-conditioned policy with the ideal target distribution for that goal $g$. $f$ -divergences are defined as (Polyanskiy & Wu, 2022),

$$
D_{f}(P || Q) = \int_{Q > 0} Q(x)\, f\!\left(\frac{P(x)}{Q(x)}\right) dx + f^{\prime}(\infty)\, P([Q(x) = 0]), \tag{3}
$$

where $f$ is a convex function with $f(1) = 0$. $f^{\prime}(\infty)$ is not defined (is $\infty$ ) for several $f$ -divergences, so it is a common assumption that $P = 0$ wherever $Q = 0$. Table 1 shows a list of commonly used $f$ -divergences with the corresponding $f$ and $f^{\prime}(\infty)$.
| $f$-divergence | $D_f(P\|\|Q)$ | $f(u)$ | $f'(u)$ | $f'(\infty)$ |
| --- | --- | --- | --- | --- |
| FKL | $\int P(x) \log \frac{P(x)}{Q(x)} dx$ | $u \log u$ | $1 + \log u$ | Undefined |
| RKL | $\int Q(x) \log \frac{Q(x)}{P(x)} dx$ | $-\log u$ | $-1/u$ | $0$ |
| JS | $\frac{1}{2} \int P(x) \log \frac{2P(x)}{P(x)+Q(x)} + Q(x) \log \frac{2Q(x)}{P(x)+Q(x)}\, dx$ | $u \log u - (1+u) \log \frac{1+u}{2}$ | $\log \frac{2u}{1+u}$ | $\log 2$ |
| $\chi^2$ | $\frac{1}{2} \int Q(x) \left(\frac{P(x)}{Q(x)} - 1\right)^2 dx$ | $\frac{1}{2}(u-1)^2$ | $u - 1$ | Undefined |
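The correspondence between the generator functions in Table 1 and their closed-form divergences can be sanity-checked numerically. Below is a minimal sketch (the two discrete distributions `P` and `Q` are illustrative, not from the paper); both have full support, so the $f'(\infty)$ correction term in Equation 3 vanishes:

```python
import numpy as np

# Two illustrative discrete distributions with shared support.
P = np.array([0.5, 0.3, 0.2])
Q = np.array([0.2, 0.5, 0.3])

def f_divergence(P, Q, f):
    """D_f(P||Q) = sum_x Q(x) f(P(x)/Q(x)) on a shared finite support."""
    return float(np.sum(Q * f(P / Q)))

# Generator functions from Table 1.
f_fkl  = lambda u: u * np.log(u)        # forward KL
f_rkl  = lambda u: -np.log(u)           # reverse KL
f_chi2 = lambda u: 0.5 * (u - 1) ** 2   # chi-squared

# Each generator recovers the corresponding closed-form divergence.
assert np.isclose(f_divergence(P, Q, f_fkl),  np.sum(P * np.log(P / Q)))
assert np.isclose(f_divergence(P, Q, f_rkl),  np.sum(Q * np.log(Q / P)))
assert np.isclose(f_divergence(P, Q, f_chi2), 0.5 * np.sum((P - Q) ** 2 / Q))
```

The shared-support assumption matters: when $Q$ (here, the goal distribution) vanishes somewhere $P$ does not, the $f'(\infty)$ term is needed, which is exactly the Dirac-goal situation discussed later in the paper.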
+

Table 1: Selected list of $f$ -divergences $D_{f}(P||Q)$ with generator functions $f$ and their derivatives $f'$ , where $f$ is convex, lower-semicontinuous and $f(1) = 0$ .

# 3 Related Work

Shaping Rewards. Our work is related to a separate class of techniques that augment the sparse reward function with dense signals. Ng et al. (1999) proposes a way to augment reward functions without changing the optimal behavior. Intrinsic Motivation (Durugkar et al., 2021; Bellemare et al., 2016; Singh et al., 2010; Barto, 2013) has been an active research area for providing shaping rewards. Some works (Niekum, 2010; Zheng et al., 2018) learn intrinsic or alternate reward functions for the underlying task that aim to improve agent learning performance, while others (Durugkar et al., 2021; Ni et al., 2020; Goyal et al., 2019) learn augmented rewards based on distribution matching. AIM (Durugkar et al., 2021) learns a potential-based shaping reward to capture the time-step distance but requires a restrictive assumption about state coverage, especially around the goal, while we do not make any such assumption. Recursive classification methods (Eysenbach et al., 2021, 2020) use future state densities as rewards. However, these methods will fail when the agent has never seen the goal. Moreover, in most of these works, the reward is not stationary (it depends on the policy),

which can lead to instabilities during policy optimization. GoFAR (Ma et al., 2022) is an offline goal-conditioned RL algorithm that minimizes a lower bound on the KL divergence between $p_{\theta}(s)$ and $p_g(s)$ . It computes rewards using a discriminator and uses the dual formulation utilized by the DICE family (Nachum et al., 2019), but reduces to GAIL (Ho & Ermon, 2016) in the online setting, requiring coverage assumptions.
Our work also minimizes the divergence between the agent's visitation distribution and the goal distribution, but we provide a new formulation for on-policy goal-conditioned RL that does not require a discriminator or the same coverage assumptions.

Policy Learning through State Matching. We first focus on imitation learning, where the expert distribution $p_E(s, a)$ is directly inferred from the expert data. GAIL (Ho & Ermon, 2016) showed that the inverse RL objective is the dual of state-matching. f-MAX (Ghasemipour et al., 2019) uses the f-divergence as a metric to match the agent's state-action visitation distribution $p_{\pi}(s, a)$ and $p_E(s, a)$ . Ke et al. (2019) and Ghasemipour et al. (2019) show how several commonly used imitation learning methods can be reduced to a divergence minimization. But all of these methods optimize a lower bound of the divergence, which is essentially a min-max bilevel optimization objective. They break the min-max into two parts, fitting the density model to obtain a reward that can be used for policy optimization. But these rewards depend on the policy, and should not be used by RL algorithms that assume stationary rewards. f-IRL (Ni et al., 2020) escapes the min-max objective but learns a reward function that can be used for policy optimization. We do not aim to learn a reward function but rather directly optimize for a policy using dense signals from an $f$ -divergence objective.

In reinforcement learning, the connection between entropy-regularized MaxEnt RL and the minimization of the reverse KL between the agent's trajectory distribution, $p_{\pi}(\tau)$ , and the "optimal" trajectory distribution, $p^{*}(\tau) \propto e^{r(\tau)}$ , has been extensively studied (Ziebart, 2010; Ziebart et al., 2008; Kappen et al., 2012; Levine, 2018; Haarnoja et al., 2018). MaxEnt RL optimizes for a policy with maximum entropy, but such a policy does not guarantee maximum coverage of the state space. Hazan et al.
(2018) discusses an objective for maximum exploration that focuses on maximizing the entropy of the state-visitation distribution, or the KL divergence between the state-visitation distribution and a uniform distribution. A few works, like Durugkar et al. (2023, 2021) and Ma et al. (2022), that have explored state-matching for reinforcement learning have been discussed above. Several works (Belousov & Peters, 2018, 2019; Touati et al., 2020) have used divergences to constrain the policy improvement steps, making the updates more stable.

Limitations of Markov Rewards. Our work looks beyond the maximization of a Markov reward for policy optimization. The learning signals that we use are non-stationary. We thus discuss the limitations of using Markov rewards for obtaining the optimal policy. There have been works (Abel et al., 2021; Clark & Amodei, 2016; Icarte et al., 2018, 2021) that highlight the difficulty of using Markov rewards. Abel et al. (2021) proves that there always exist environment-task pairs that cannot be described using Markov rewards. Reward Machines (Icarte et al., 2018) create finite automata to specify reward functions and can specify non-Markov rewards as well, but these are hand-crafted.

# 4 $f$ -Policy Gradient

In this paper, we derive an algorithm where the agent learns by minimizing the following $f$ -divergence:

$$
J(\theta) = D_{f}\left(p_{\theta}(s) \mid\mid p_{g}(s)\right) \tag{4}
$$

In this section, we shall derive an algorithm to minimize $J(\theta)$ and analyze the objective more closely in the subsequent section. Unlike f-MAX (Ghasemipour et al., 2019), we directly optimize $J(\theta)$ . We differentiate $J(\theta)$ with respect to $\theta$ , following a technique similar to that of Ni et al. (2020), to obtain the analytical gradient of $J(\theta)$ .

Theorem 4.1.
The gradient of $J(\theta)$ as defined in Equation 4 is given by,

$$
\nabla_{\theta} J(\theta) = \mathbb{E}_{\tau \sim p_{\theta}(\tau)} \left[ \left[ \sum_{t = 1}^{T} \nabla_{\theta} \log \pi_{\theta}\left(a_{t} \mid s_{t}\right) \right] \left[ \sum_{t = 1}^{T} f^{\prime}\left(\frac{p_{\theta}\left(s_{t}\right)}{p_{g}\left(s_{t}\right)}\right) \right] \right]. \tag{5}
$$

The gradient looks exactly like a policy gradient with reward $-f'\left(\frac{p_{\theta}(s_t)}{p_g(s_t)}\right)$ . However, this does not mean that we are maximizing $J^{RL}(\theta) = \mathbb{E}_{\tau \sim p_{\theta}(\tau)}\left[\sum_{t=1}^{T} -f'\left(\frac{p_{\theta}(s_t)}{p_g(s_t)}\right)\right]$ , because the gradient of $J^{RL}(\theta)$ is not the same as $\nabla_{\theta}J(\theta)$ . For Dirac goal distributions, the gradient in Equation 5 cannot be used (as $f' \left( \frac{p_{\theta}(s_t)}{p_g(s_t)} \right)$ will not be defined when $p_g(s_t) = 0$ ). We can use the definition of $f$ -divergence in Equation 3 to derive a gradient for such distributions.

The gradient is obtained in terms of the state visitation frequencies $\eta_{\tau}(s)$ . Further examination of the gradient leads to the following theorem.

Theorem 4.2. Updating the policy using the gradient (Equation 5) maximizes $\mathbb{E}_{\tau \sim p_{\theta}(\tau)}[\eta_{\tau}(g)]$ .

Theorem 4.2 provides another perspective on $f$ -Policy Gradients: $\eta_{\tau}(g)$ is equivalent to the expected return for a goal-based sparse reward, so these updates optimize the true goal-conditioned RL objective. We shall prove the optimality of the policy obtained from minimizing $J(\theta)$ in the next section.

In practice, a Dirac goal distribution can be approximated by clipping off the zero probabilities at $\epsilon$ , similar to a Laplace correction. Doing so, we will be able to use dense signals from the gradient in Equation 5 while still producing the optimal policy.
This approximation is different from simply adding an $\epsilon$ reward at every state, because the gradients are still weighted by $f'\left(\frac{p_{\theta}(s_t)}{\epsilon}\right)$ , which depends on $p_{\theta}(s_t)$ .

Simply optimizing $J(\theta)$ is difficult because it faces similar issues to REINFORCE (Williams & Peng, 1991). A major shortcoming of the above gradient computation is that it requires completely on-policy updates. This requirement makes learning sample inefficient, especially when dealing with complex environments. However, there have been a number of improvements to naïve policy gradients that can be used. One approach is to use importance sampling (Precup, 2000), allowing samples collected from a previous policy $\pi_{\theta'}$ to be used for learning. To reap the benefits of importance sampling, we need the previous state-visitation distributions to compute $f' \left( \frac{p_{\theta'}(s)}{p_g(s)} \right)$ . Hence, we need to ensure that the current policy does not diverge much from the previous policy. This condition is ensured by constraining the KL divergence between the current policy and the previous policy. We use a clipped objective similar to Proximal Policy Optimization (PPO) (Schulman et al., 2017), which has been shown to work well with policy gradients; PPO showed that the clipped loss works well even without an explicit KL constraint in the objective.
The gradient used in practice is,

$$
\nabla_{\theta} J(\theta) = \mathbb{E}_{s_{t}, a_{t} \sim p_{\theta'}(s_{t}, a_{t})} \left[ \min \left( r_{\theta}(t)\, F_{\theta'}\left(s_{t}\right),\ \mathrm{clip}\left(r_{\theta}(t), 1 - \epsilon, 1 + \epsilon\right) F_{\theta'}\left(s_{t}\right) \right) \right] \tag{6}
$$

where $r_{\theta}(t) = \frac{\pi_{\theta}(a_t \mid s_t)}{\pi_{\theta'}(a_t \mid s_t)}$ and $F_{\theta'}(s_t) = \sum_{t' = t}^{T} \gamma^{t'} f'\left(\frac{p_{\theta'}(s_{t'})}{p_g(s_{t'})}\right)$ . The derivation for this objective is provided in Appendix B. $\gamma$ is added to improve the stability of the gradients and to prevent the sum of $f^{\prime}\left(\frac{p_{\theta^{\prime}}(s_{t'})}{p_g(s_{t'})}\right)$ terms from exploding.

For the purposes of this paper, we use kernel density estimators to estimate the goal distribution and the agent's state visitation distribution. We could also use discriminators to estimate the ratio of these densities, as in Ho & Ermon (2016); Fu et al. (2017); Ghasemipour et al. (2019). But unlike these methods, we will not be incorrectly breaking a min-max objective. In our case, the estimate of the gradient requires only the value of the ratio of the two distributions and does not make any assumptions about the stationarity of these values. Adversarial methods, in contrast, break the min-max objective and assume the discriminator
+ +# Algorithm 1 $f$ -PG + +Let, $\pi_{\theta}$ be the policy, $G$ be the set of goals, $B$ be a buffer +for $i = 1$ to num_iter do + $B \gets []$ +for $j = 1$ to num_traj_per_iter do +Sample $g$ , set $p_g(s)$ +Collect goal conditioned trajectories, $\tau : g$ +Fit $p_{\theta}(s)$ using KDE on $\tau$ +Store $f' \left( \frac{p_{\theta}(s)}{p_g(s)} \right)$ for each $s$ in $\tau$ $B \gets B + \{\tau : g\}$ +end for +for $j = 1$ to num_policy Updates do + $\theta \gets \theta - \alpha \nabla_{\theta} J(\theta)$ (Equation 6) +end for + +# 5 Theoretical analysis of $f$ -PG + +In this section, we will first show that minimizing the f-divergence between the agent's state visitation distribution and goal distribution yields the optimal policy. We will further analyze the connections to metric based shaping rewards and implicit exploration boost from the learning signals. For the rest + +of the paper, we will refer to $f$ -PG using FKL divergence as $fkl$ -PG, $f$ -PG using RKL divergence as $rkl$ -PG and so on. + +# 5.1 Analysis of $J(\theta)$ + +This section shows that the policy obtained by minimizing an $f$ -divergence between the agent's state visitation distribution and the goal distribution is the optimal policy. + +Theorem 5.1. The policy that minimizes $D_{f}(p_{\pi}||p_{g})$ for a convex function $f$ with $f(1) = 0$ and $f'(\infty)$ being defined, is the optimal policy. + +The proof for Theorem 5.1 is provided in Appendix A. The Theorem states that the policy obtained by minimizing the $f$ -divergence between the agent's state-visitation distribution and the goal distribution is the optimal policy for a class of convex functions defining the $f$ -divergence with $f'(\infty)$ defined. It thus makes sense to minimize the $f$ -divergence between the agent's visitation and the goal distribution. It must be noted that the objective does not involve maximizing a reward function. Note that the condition that $f'(\infty)$ is defined is not true for all $f$ -divergences. 
The common $f$ -divergences like RKL, TV, and JS have $f'(\infty)$ defined $rkl$ -PG, $tv$ -PG, and $js$ -PG will produce the optimal policy. + +Forward KL divergence (FKL) has $f = u\log u$ and so does not have $f^{\prime}(\infty)$ defined. Does this mean that the policy obtained by minimizing the FKL divergence is not optimal? Lemma 5.1 (proof in Appendix A) shows that the policy obtained maximizes the entropy of the agent's state-visitation distribution along with maximizing a reward of $\log p_{g}(s)$ . + +Lemma 5.1. $fkl - PG$ produces a policy that maximizes the reward $\log p_{g}(s)$ along with the entropy of the state-visitation distribution. + +A similar result can be shown for $\chi^2$ -divergence as well. It must be understood that Lemma 5.1 does not mean that $fkl$ -PG is the same as the commonly studied MaxEnt RL. + +Differences from MaxEnt RL: MaxEnt RL, as studied in Haarnoja et al. (2017, 2018), maximizes the entropy of the policy along with the task reward to achieve better exploration. However, maximizing the entropy of the policy does not imply maximum exploration. Hazan et al. (2018) shows that maximizing the entropy of the state-visitation distribution provably provides maximum exploration. Lemma 5.1 shows that $fkl$ -PG maximizes the entropy of the state-visitation distribution along with the reward making it better suited for exploration. To distinguish our work, we call the MaxEnt RL, as discussed in works like Haarnoja et al. (2017, 2018), as $\pi$ -MaxEnt RL because it only focuses on the entropy of the policy. On the other hand, $fkl$ -PG maximizes the entropy of the state-visitation distribution so we call it state-MaxEnt RL or s-MaxEnt RL. Similarly, sa-MaxEnt RL can be defined to maximize the entropy of the state-action visitation distribution. 
![](images/f4ab49e02ec7bb7ead8b18708210b2ea656b5878b3b8e4371dc0d400e0a59e69.jpg)
(a) $s$ -MaxEnt RL

![](images/c042f77c726f32b6785659633cdf835953f66990a3ce47ce8b1bc5dc5be44c22.jpg)

(b) $\pi$ -MaxEnt RL
Figure 1: Comparison of the evolution of state-visitation distributions during training for $\pi$ -MaxEnt RL and $s$ -MaxEnt RL. The darker regions indicate lower visitation while the brighter regions indicate higher visitation.

Since the agent's state visitation distribution depends on both the policy and the dynamics, simply increasing the entropy of the policy (without considering the dynamics) will not ensure that the agent will visit most of the states or will have a state-visitation distribution with high entropy. In Figure 1, we compare how efficiently $\pi$ -MaxEnt RL and $s$ -MaxEnt RL explore around a wall in a discrete gridworld. The initial and the goal distributions (highlighted in green and red respectively)

![](images/01e8b0b1f015728e9c545d3c060a5dff22934c7948b15cda5a26a19d136f0afe.jpg)

![](images/1cdd1ed0787d08427ecce06b69859e09d8577c33cb38935085b84a03049f95d1.jpg)

![](images/352bf7c29a8f6ca75656puzzle.jpg)

![](images/8ff0d14020f4c9c8a5f0a6adce48c277c7800f9fae7ee6fd2e7f404b3d185b8f.jpg)

![](images/764aba4265e9b12240b7375caac36ce50bc2eb3b2b413c5aa89ca887667cb5.jpg)

![](images/12e7ae56e38619324d47cabd4ac96456c6e846072677d886ec0e1580e46c3879.jpg)
Figure 2: Evolution of $-f^{\prime}(\frac{p_{\theta}(s)}{p_g(s)})$ for $f = u\log u$ through policy learning. Top: $f^{\prime}(\frac{p_{\theta}(s)}{p_g(s)})$ , darker blue indicates relatively lower values (higher values for the learning signal) while red corresponds to high values (lower values for the learning signal). Bottom: Corresponding state-visitation of the policy.
![](images/b8f477da074dcb3231b8a63801ecbab3f6b236825538a3021524080208301296.jpg)

![](images/53fbeecd60c4de0fed73f5975f6295fcda2d13ec9aa55eae1066058e518746ff.jpg)

![](images/481b50bc8c99376fd43f776841cf7f250bc8d35ba8af471998a4b682a892bf9a.jpg)

![](images/896449534a02e3010b613c0468770ab1aea32fbb546a441395a97b423eacf4f4.jpg)

are separated by a wall. This environment is further discussed in Section 6.1 and Appendix C. Figure 1 shows the evolution of the agent's state-visitation distribution with training for $s$ -MaxEnt RL ($fkl$ -PG) and $\pi$ -MaxEnt RL (Soft Q Learning (Haarnoja et al., 2017)).

Metric-based Shaping Reward: A deeper look into Lemma 5.1 shows that an appropriate choice of $p_g(s)$ can lead to entropy-maximizing policy optimization with metric-based shaping rewards. Define the goal distribution as $p_g(s) \propto e^{f(s;g)}$ where $f(s;g)$ captures the metric of the underlying space. Then the $fkl$ -PG objective becomes,

$$
\min D_{FKL}\left(p_{\theta} \,\|\, p_{g}\right) = \max \mathbb{E}_{p_{\theta}}[f(s; g)] - \mathbb{E}_{p_{\theta}}[\log p_{\theta}]. \tag{7}
$$

The above objective maximizes the reward $f(s; g)$ along with the entropy of the agent's state visitation distribution. For an L2 Euclidean metric, $f(s; g)$ will be $-||s - g||_2^2$ , which is the L2 shaping reward, and the goal distribution will be Gaussian. If the goal distribution is Laplacian, the corresponding shaping reward will be the L1 norm.

AIM (Durugkar et al., 2021) used a potential-based shaping reward based on a time-step quasimetric. If we define $f(s;g)$ as a Lipschitz function for the time-step metric maximizing at $s = g$ , we end up optimizing for the AIM reward along with maximizing the entropy of the state-visitation distribution.

# 5.2 Analysis of the learning signals

$f$ -PG involves a learning signal $f^{\prime}\left(\frac{p_{\theta}(s)}{p_g(s)}\right)$ to weigh the gradients of log probabilities of the policy.
Since we are minimizing the objective (in contrast to policy gradients), the visitation is pushed towards states with lower values of the learning signal. It is thus important to understand how $f^{\prime}\left(\frac{p_{\theta}(s)}{p_g(s)}\right)$ behaves in goal-conditioned RL settings. During the initial stages of training, the agent visits regions with very low $p_{g}$ . For such states, the signal has a higher value than for the states with low $p_{\theta}$ , i.e., the unexplored states. This is because for any convex function $f$ , $f^{\prime}(x)$ is an increasing function, so minimizing $f^{\prime}\left(\frac{p_{\theta}(s)}{p_g(s)}\right)$ implies minimizing $p_{\theta}(s)$ for the states with low $p_{g}(s)$ . The only way to do this is to increase the entropy of the state-visitation distribution, directly making the agent explore new states. As long as there is no significant overlap between the two distributions, the objective pushes $p_{\theta}$ down to a flatter distribution; once there is enough overlap with the goal distribution, it pulls the agent's visitation back towards the goal distribution.

This learning signal should not be confused with a reward in reinforcement learning. It is non-stationary and non-Markovian as it depends on the policy. More importantly, we are not maximizing this signal, just using it to weigh the gradients of the policy.

In the following example, we use the Reacher environment (Todorov et al., 2012) to illustrate how our learning signal $f'(\frac{p_{\theta}(s)}{p_g(s)})$ varies as the agent learns. We also show how this signal can push for exploration when the agent has not yet seen the goal. We fix the goal at $(-0.21, 0)$ and show how the learning signal evolves with the policy. While Figure 2 shows the evolution of $-f'(\frac{p_{\theta}(s)}{p_g(s)})$ (note the negation) for $fkl$ -PG, the rest can be found in Appendix D.
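This qualitative behavior is easy to check outside of Reacher. The sketch below is our illustration, not the paper's code: it evaluates the learning signal $f'(p_{\theta}(s)/p_g(s))$ for $f(u) = u\log u$, so $f'(u) = \log u + 1$, on a toy one-dimensional state space. The Gaussian shapes and placements of $p_{\theta}$ and $p_g$ are illustrative assumptions.

```python
import numpy as np

# Toy 1-D sketch of the f-PG learning signal for f(u) = u log u,
# whose derivative is f'(u) = log(u) + 1.  The state grid and the
# Gaussian shapes of p_theta and p_g are illustrative assumptions.
states = np.linspace(-1.0, 1.0, 201)

def normalize(p):
    return p / p.sum()

# Goal density peaked at s = 0.8; early-training visitation peaked at s = -0.8.
p_g = normalize(np.exp(-((states - 0.8) ** 2) / 0.01))
p_theta = normalize(np.exp(-((states + 0.8) ** 2) / 0.05))

eps = 1e-12  # avoids log(0) where either density underflows
signal = np.log((p_theta + eps) / (p_g + eps)) + 1.0  # f'(p_theta / p_g)

# The signal is highest where the agent already visits and lowest near the
# goal, so minimizing it flattens p_theta and pulls mass toward the goal.
s_visited = signal[np.argmax(p_theta)]
s_goal = signal[np.argmax(p_g)]
```

Because $f'$ is increasing, the weighted objective lowers visitation exactly where the signal is large, reproducing the exploration behavior described above.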
The value of $f^{\prime}\left(\frac{p_{\theta}(s)}{p_{g}(s)}\right)$ is highest where the agent's visitation is high and lower where the agent does not visit, with its lowest value at the goal. As the policy converges to the optimal policy, the value of $f^{\prime}\left(\frac{p_{\theta}(s)}{p_{g}(s)}\right)$ decreases in the regions where the state-visitation distribution is considerably high (towards the bottom-right in the figure), still pushing for exploration, but its value at the goal is low enough for the policy to converge.

# 6 Experiments

Our experiments evaluate our new framework ( $f$ -PG) as an alternative to conventional reward maximization for goal-conditioned RL. We pose the following questions:

1. Does $f$ -PG provide sufficient signals to explore in otherwise challenging sparse reward settings?
2. How well does our framework perform compared to discriminator-based approaches?
3. Can our framework scale to larger domains with continuous state spaces and randomly generated goals?
4. How do different $f$ -divergences affect learning?

The first two questions are answered using a toy gridworld environment. The gridworld has a goal contained in a room, which poses a significant exploration challenge. We also show how the dense signal that weighs the gradients of the policy evolves during training on a continuous domain like Reacher. To answer the third question, our framework is compared with several baselines on a 2D maze-solving task (Point Maze). Additionally, we scale to more complex tasks such as FetchReach (Plappert et al., 2018) and an exploration-heavy PointMaze.

# 6.1 Gridworld

We use a gridworld environment to compare and visualize the effects of using different shaping rewards for exploration. We discussed this environment briefly in Section 5.1. The task is for the agent to reach the goal contained in a room.
The only way to reach the goal is to go around the wall. The task reward is 1 when the agent reaches the room and 0 otherwise. The state space is simply the $(x, y)$ coordinates of the grid, and the goal is fixed. A detailed description of the task is provided in Appendix C. Although the environment seems simple, exploration here is very difficult as there is no incentive for the agent to go around the wall.

![](images/1dd4dc87eee0ad0a988a6e3479f454fb12ae857a9b5505f744d6751877482774.jpg)
(a) $fkl$ -PG

![](images/3bff903566b5eb9d25d42573e0bb50b43e7bb98d920047befc12bdff47fdde88.jpg)
(b) $rkl$ -PG

![](images/d070ef6db05ff2eed27b0545edc092cc202edc2bf99a3f35c92e5b718e2b709f.jpg)
(c) AIM

![](images/94ba524a7e9989ee0c3b390729560b53eecde9e1b33346d62264baefb145e825.jpg)
(d) GAIL
Figure 3: Gridworld: The agent needs to move from the green circle to the red circle. The state visitations of the policies (after 500 policy updates) are shown when using our framework for training ($fkl$, $rkl$) compared with AIM and GAIL trained on top of Soft Q Learning.

Our framework is compared against AIM (Durugkar et al., 2021), which initially introduced this environment and uses a shaping reward obtained from state-matching to solve it, and GAIL (Ho & Ermon, 2016), which uses a discriminator to learn the probability of a state being the goal state. We provide a comparison to other recent methods in Appendix C. All the baselines are implemented on top of Soft Q Learning (Haarnoja et al., 2017), which, along with maximizing the augmented rewards, also maximizes the entropy of the policy, while $f$ -PG is implemented as an on-policy algorithm without any extrinsic entropy-maximization objective. It can be seen from Figure 3 that $f$ -PG can explore enough to find the way around the room, which is difficult for methods like GAIL even with the entropy boost. AIM learns a potential function and can also find its way across the wall.
As expected, $fkl$ -PG converges to the policy maximizing the entropy of the state visitation, while $rkl$ -PG produces the optimal state visitation as expected from Theorem 5.1. This simple experiment clearly illustrates two things: (1) $f$ -PG can generate dense signals to explore the state space and search for the goal, and (2) although discriminator-based methods like GAIL try to perform state-matching, they fail to explore the space well.

# 6.2 Point Maze

While the gridworld poses an exploration challenge, the environment is simple and has only one goal. This experiment shows that $f$ -PG scales to larger domains with continuous state spaces and a large set of goals. We use the Point Maze environments (Fu et al., 2020), a set of offline RL environments, and modify them to support our online algorithms. The state space is continuous and consists of the position and velocity of the agent and the goal. The action is the force applied in each direction. There are three variations of the environment, namely PointMazeU, PointMazeMedium, and PointMazeLarge. For the details of the three environments, please refer to Appendix E.

We compare $f$ -PG with several goal-based shaping rewards (used alongside the task reward as described in Ng et al. (1999)) to optimize a PPO policy. The rewards tried (along with their abbreviations in the plots) are AIM (Durugkar et al., 2021) (aim), GAIL (Ho & Ermon, 2016) (gail), AIRL (Fu et al., 2017) (airl), and F-AIRL (Ghasemipour et al., 2019) (fairl). All these methods employ a state-matching objective. AIM uses the Wasserstein distance while the rest use some form of $f$ -divergence, but all of them rely on discriminators. Along with these baselines, we experiment with using our learning signal as a shaping reward (fkl-rew). Additionally, we also compare with PPO optimized by only the task reward (none). For our method, we show results only for $fkl$ -PG; for the rest of the possible $f$ -divergences, refer to Section 6.4.
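In continuous state spaces such as Point Maze, $p_{\theta}$ is not available in closed form and must be estimated from rollouts. The sketch below is one minimal way to do this (our illustration, not the paper's implementation): a Gaussian kernel-density estimate of $p_{\theta}$ from sampled states, a Gaussian goal density $p_g$, and the resulting $fkl$ learning signal $f'(p_{\theta}(s)/p_g(s)) = \log(p_{\theta}(s)/p_g(s)) + 1$. The bandwidth, goal location, and rollout distribution are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
goal = np.array([0.5, 0.5])  # illustrative goal location

def gaussian_kde(points, query, bandwidth=0.1):
    """Isotropic Gaussian KDE: average of kernels centred on rollout states."""
    diff = query[:, None, :] - points[None, :, :]            # (Q, N, D)
    sq = (diff ** 2).sum(axis=-1) / (2.0 * bandwidth ** 2)   # (Q, N)
    norm = (2.0 * np.pi * bandwidth ** 2) ** (points.shape[1] / 2.0)
    return np.exp(-sq).mean(axis=1) / norm                   # (Q,)

def log_p_g(query, sigma=0.05):
    """Log-density of an (assumed) isotropic Gaussian goal distribution."""
    sq = ((query - goal) ** 2).sum(axis=-1) / (2.0 * sigma ** 2)
    return -sq - np.log(2.0 * np.pi * sigma ** 2)

# Rollout states from a policy that has not reached the goal yet.
states = rng.normal(loc=[-0.5, -0.5], scale=0.2, size=(512, 2))

# fkl learning signal f'(u) = log(u) + 1 at two query states:
# the visitation mode and the goal.
queries = np.array([[-0.5, -0.5], [0.5, 0.5]])
p_theta_q = gaussian_kde(states, queries)
signal = np.log(p_theta_q + 1e-12) - log_p_g(queries) + 1.0

# The weight is far larger at the visitation mode than at the goal, so
# minimizing the weighted objective pushes visitation toward the goal.
```

These per-state weights multiply the gradients of the policy's log-probabilities in the on-policy update (Theorem 4.1) and must be recomputed every iteration, since $p_{\theta}$ changes with the policy.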
![](images/75f9fc58c3a57fe7c4dda5430303ebdcba57784f5a10577cfe9c14c331a24b51.jpg)
Figure 4: Success rates (averaged over 100 episodes and 3 seeds) of $fkl$ -PG and all the baselines. $fkl$ -PG performs well in all three environments and better than the baseline shaping rewards in the two tougher environments.

![](images/d2c4c0e5f648a8a9038e08cb549ab7a2da70a29bbc336014183a97f0e3e8d6ef.jpg)

![](images/3aaf0819754835d723cab221d3cf2d08e71b8816efbc2fee7c50fca6ff88a0cf.jpg)

Figure 4 (plotting mean and std-dev over 3 seeds) clearly illustrates that $fkl$ -PG performs well in all three environments. In fact, it performs better than the baselines in the more difficult environments. It can also be seen that shaping rewards can often lead to suboptimal performance, as none is higher than a few of the shaping rewards. As expected, the fkl-rew curve performs poorly. In the simpler PointMazeU environment, the performance for most of the shaping rewards is similar (along with none), but in the more complex PointMazeMedium and PointMazeLarge, many of these shaping rewards fail.

# 6.3 Scaling to Complex Tasks

We scale our method to more complex tasks such as FetchReach (Plappert et al., 2018) and a difficult version of PointMaze. In the PointMaze environments used in the previous section, the distributions from which the initial state and the goal are sampled have significant overlap, easing exploration. We modify these environments to ensure a significant distance between the sampled goal distribution and the agent's state-visitation distribution, as shown in Figure 5 (top), making exploration highly challenging. Figure 5 (bottom) shows the comparison of $fkl$ -PG with GAIL (Ho & Ermon, 2016) and AIM (Durugkar et al., 2021).
The following can be concluded from these experiments: (1) The discriminator-based methods heavily depend on coverage assumptions and fail in situations where there is no significant overlap between the goal distribution and the agent's state-visitation distribution; $fkl$ -PG does not depend on any such assumptions. (2) $f$ -PG is considerably more stable than these baselines (as indicated by their variance).

# 6.4 Comparing different $f$ -divergences

We perform an ablation to compare different $f$ -divergences on their performance on the three Point Maze environments. Figure 6 (plotting mean and std-dev over 3 seeds) shows that, empirically, $fkl$ -PG performs the best, followed by $\chi^2$ -PG. Interestingly, both of these do not guarantee optimal policies

![](images/7d06561586ab9f02129fdff6a6ad97269e0af2ce0d1fe61e7b7bd9ec3608bf61.jpg)

![](images/434f22b831123e9bae68f9a5f9103e6675816aee662d4d4d99684381d122a6e7.jpg)

![](images/0197fa08cebe69d60f5e42fce32919bd7e6b73167436ebf2cd0172c4fc2685b8f.jpg)

![](images/2c67a28f77b225aef746fd00af57c3732df4007e2ca3b2ebce57079492bc5b8.jpg)

![](images/9e06b38757c6572d745295b7773c97e086a9aff3bc6cf3d416770c2183a3cfe9.jpg)

![](images/5c3c2836a6618381d15a0ac92c6413fc72b2732986fc2fc1cf724e080290e597.jpg)

![](images/447f4cc59ed618790700d5cb029fc5edeee82f3fe8a20f59a6615536a30049e7.jpg)
Figure 5: (top): Description of the environments. In the PointMaze environments, the green and red shades represent the distributions from which the initial state and goal states are sampled. (bottom): Success rates (averaged over 100 episodes and 3 seeds) of $fkl$ -PG, GAIL and AIM. $fkl$ -PG outperforms these baselines with considerably lower variance.

![](images/f7245cd3546045a9fd7199c8f234298eda00dac766fb671c4d9a6ca76a699d61.jpg)
Figure 6: Success rates (averaged over 100 episodes and 3 seeds) of $f$ -PG for different $f$ . $fkl$ -PG performs the best followed by $\chi^2$ -PG.
![](images/6735080bfcabd0c392f8ad4d385a0f947195102bc9c6f2a5d449dcaa35dce27a.jpg)

but it can be shown from Lemma 5.1 that $fkl$ -PG converges to a policy that, along with maximizing a "reward", maximizes the entropy of the state visitation. A similar result can be shown for $\chi^2$ as well (proof in Appendix A). This result can be explained by the need for exploration in the larger mazes, which favors policies that keep the entropy of the state visitation high.

# 7 Discussion

This paper derives a novel framework for goal-conditioned RL in the form of an on-policy algorithm, $f$ -policy gradients, which minimizes the $f$ -divergence between the agent's state visitation and the goal distribution. It proves that for certain $f$ -divergences we can recover the optimal policy, while for others we obtain a policy maximizing the entropy of the state visitation. Entropy-regularized policy optimization (s-MaxEnt RL) for metric-based shaping rewards can be shown to be a special case of $f$ -PG where $f$ is $fkl$ . $f$ -PG can provide an exploration bonus when the agent has not yet seen the goal. We demonstrated that $f$ -PG can scale up to complex domains.

Through this work, we introduce a new perspective on goal-conditioned RL. By circumventing rewards, $f$ -PG can avoid issues that arise with reward misspecification (Knox et al., 2021). There are several avenues for future work. First, the current framework is on-policy, which poses an exploration challenge; an avenue for future work is to develop an off-policy way to optimize the objective. Second, this paper does not tackle goal distributions with several modes; such target distributions would be interesting to address in future work.

# 8 Acknowledgements

This work was in part supported by Cisco Research. Any opinions, findings and conclusions, or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of Cisco Research.
+This work has partially taken place in the Learning Agents Research Group (LARG) at UT Austin. LARG research is supported in part by NSF (FAIN-2019844, NRT-2125858), ONR (N00014-18-2243), ARO (E2061621), Bosch, Lockheed Martin, and UT Austin's Good Systems grand challenge. Peter Stone serves as the Executive Director of Sony AI America and receives financial compensation for this work. The terms of this arrangement have been reviewed and approved by the University of Texas at Austin in accordance with its policy on objectivity in research. + +# References + +David Abel, Will Dabney, Anna Harutyunyan, Mark K. Ho, Michael L. Littman, Doina Precup, and Satinder Singh. On the expressivity of markov reward. CoRR, abs/2111.00876, 2021. URL https://arxiv.org/abs/2111.00876. +Jose A. Arjona-Medina, Michael Gillhofer, Michael Widrich, Thomas Unterthiner, Johannes Brandstetter, and Sepp Hochreiter. Rudder: Return decomposition for delayed rewards, 2019. +Andrew G. Barto. Intrinsic Motivation and Reinforcement Learning, pp. 17-47. Springer Berlin Heidelberg, Berlin, Heidelberg, 2013. ISBN 978-3-642-32375-1. doi: 10.1007/978-3-642-32375-1_2. URL https://doi.org/10.1007/978-3-642-32375-1_2. +Marc G. Bellemare, Sriram Srinivasan, Georg Ostrovski, Tom Schaul, David Saxton, and Remi Munos. Unifying count-based exploration and intrinsic motivation, 2016. +Boris Belousov and Jan Peters. f-divergence constrained policy improvement. CoRR, abs/1801.00056, 2018. URL http://arxiv.org/abs/1801.00056. +Boris Belousov and Jan Peters. Entropic regularization of markov decision processes. CoRR, abs/1907.04214, 2019. URL http://arxiv.org/abs/1907.04214. +Serena Booth, Julie Shah, Scott Niekum, Peter Stone, and Alessandro Allievi. The perils of trial-and-error reward design: misdesign through overfitting and invalid task specifications. 2023. +Jack Clark and Dario Amodei. Faulty reward functions in the wild, 2016. URL https://openai.com/research/faulty-reward-functions. 
Ishan Durugkar, Maurizio Tec, Scott Niekum, and Peter Stone. Adversarial intrinsic motivation for reinforcement learning. CoRR, abs/2105.13345, 2021. URL https://arxiv.org/abs/2105.13345.
Ishan Durugkar. Estimation and control of visitation distributions for reinforcement learning. PhD thesis, 2023.
Benjamin Eysenbach, Ruslan Salakhutdinov, and Sergey Levine. C-learning: Learning to achieve goals via recursive classification. CoRR, abs/2011.08909, 2020. URL https://arxiv.org/abs/2011.08909.
Benjamin Eysenbach, Sergey Levine, and Ruslan Salakhutdinov. Replacing rewards with examples: Example-based policy search via recursive classification. CoRR, abs/2103.12656, 2021. URL https://arxiv.org/abs/2103.12656.
Alhussein Fawzi, Matej Balog, Aja Huang, Thomas Hubert, Bernardino Romera-Paredes, Mohammadamin Barekatain, Alexander Novikov, Francisco J. R. Ruiz, Julian Schrittwieser, Grzegorz Swirszcz, David Silver, Demis Hassabis, and Pushmeet Kohli. Discovering faster matrix multiplication algorithms with reinforcement learning. Nature, 610(7930):47-53, 2022. doi: 10.1038/s41586-022-05172-4.
Justin Fu, Katie Luo, and Sergey Levine. Learning robust rewards with adversarial inverse reinforcement learning. CoRR, abs/1710.11248, 2017. URL http://arxiv.org/abs/1710.11248.

Justin Fu, Aviral Kumar, Ofir Nachum, George Tucker, and Sergey Levine. D4RL: datasets for deep data-driven reinforcement learning. CoRR, abs/2004.07219, 2020. URL https://arxiv.org/abs/2004.07219.
Theophile Gervet, Soumith Chintala, Dhruv Batra, Jitendra Malik, and Devendra Singh Chaplot. Navigating to objects in the real world. Science Robotics, 8(79):eadf6991, 2023. doi: 10.1126/scirobotics.adf6991. URL https://www.science.org/doi/abs/10.1126/scirobotics.adf6991.
Seyed Kamyar Seyed Ghasemipour, Richard S. Zemel, and Shixiang Gu. A divergence minimization perspective on imitation learning methods. CoRR, abs/1911.02256, 2019. URL http://arxiv.org/abs/1911.02256.
+Prasoon Goyal, Scott Niekum, and Raymond J. Mooney. Using natural language for reward shaping in reinforcement learning. CoRR, abs/1903.02020, 2019. URL http://arxiv.org/abs/1903.02020. +Abhishek Gupta, Vikash Kumar, Corey Lynch, Sergey Levine, and Karol Hausman. Relay policy learning: Solving long-horizon tasks via imitation and reinforcement learning. CoRR, abs/1910.11956, 2019. URL http://arxiv.org/abs/1910.11956. +Tuomas Haarnoja, Haoran Tang, Pieter Abbeel, and Sergey Levine. Reinforcement learning with deep energy-based policies. CoRR, abs/1702.08165, 2017. URL http://arxiv.org/abs/1702.08165. +Tuomas Haarnoja, Aurick Zhou, Pieter Abbeel, and Sergey Levine. Soft actor-critic: Off-policy maximum entropy deep reinforcement learning with a stochastic actor. CoRR, abs/1801.01290, 2018. URL http://arxiv.org/abs/1801.01290. +Elad Hazan, Sham M. Kakade, Karan Singh, and Abby Van Soest. Provably efficient maximum entropy exploration. CoRR, abs/1812.02690, 2018. URL http://arxiv.org/abs/1812.02690. +Jonathan Ho and Stefano Ermon. Generative adversarial imitation learning. CoRR, abs/1606.03476, 2016. URL http://arxiv.org/abs/1606.03476. +Rodrigo Toro Icarte, Toryn Klassen, Richard Valenzano, and Sheila McIlraith. Using reward machines for high-level task specification and decomposition in reinforcement learning. In Jennifer Dy and Andreas Krause (eds.), Proceedings of the 35th International Conference on Machine Learning, volume 80 of Proceedings of Machine Learning Research, pp. 2107-2116. PMLR, 10-15 Jul 2018. URL https://proceedings.mlr.press/v80/icarte18a.html. +Rodrigo Toro Icarte, Ethan Waldie, Toryn Q. Klassen, Richard Anthony Valenzano, Margarita P. Castro, and Sheila A. McIlraith. Learning reward machines: A study in partially observable reinforcement learning. CoRR, abs/2112.09477, 2021. URL https://arxiv.org/abs/2112.09477. 
John Jumper, Richard Evans, Alexander Pritzel, Tim Green, Michael Figurnov, Olaf Ronneberger, Kathryn Tunyasuvunakool, Russ Bates, Augustin Žídek, Anna Potapenko, Alex Bridgland, Clemens Meyer, Simon Kohl, Andrew Ballard, Andrew Cowie, Bernardino Romera-Paredes, Stanislav Nikolov, Rishub Jain, Jonas Adler, and Demis Hassabis. Highly accurate protein structure prediction with alphafold. Nature, 596:1-11, 08 2021. doi: 10.1038/s41586-021-03819-2.
Leslie Pack Kaelbling. Learning to achieve goals. In *IJCAI*, pp. 1094–1099. Citeseer, 1993.
Hilbert J. Kappen, Vicenç Gómez, and Manfred Opper. Optimal control as a graphical model inference problem. Machine Learning, 87(2):159-182, feb 2012. doi: 10.1007/s10994-012-5278-7. URL https://doi.org/10.1007%2Fs10994-012-5278-7.
Liyiming Ke, Matt Barnes, Wen Sun, Gilwoo Lee, Sanjiban Choudhury, and Siddhartha S. Srinivasa. Imitation learning as f-divergence minimization. CoRR, abs/1905.12888, 2019. URL http://arxiv.org/abs/1905.12888.
Heecheol Kim, Yoshiyuki Ohmura, and Yasuo Kuniyoshi. Robot peels banana with goal-conditioned dual-action deep imitation learning, 2022.

W. Bradley Knox, Alessandro Allievi, Holger Banzhaf, Felix Schmitt, and Peter Stone. Reward (mis)design for autonomous driving. CoRR, abs/2104.13906, 2021. URL https://arxiv.org/abs/2104.13906.
Sergey Levine. Reinforcement learning and control as probabilistic inference: Tutorial and review. CoRR, abs/1805.00909, 2018. URL http://arxiv.org/abs/1805.00909.
Xiao-Yang Liu, Hongyang Yang, Jiechao Gao, and Christina Dan Wang. FinRL. In Proceedings of the Second ACM International Conference on AI in Finance. ACM, nov 2021. doi: 10.1145/3490354.3494366. URL https://doi.org/10.1145/3490354.3494366.
Yecheng Jason Ma, Jason Yan, Dinesh Jayaraman, and Osbert Bastani. How far i'll go: Offline goal-conditioned reinforcement learning via $f$ -advantage regression, 2022. URL https://arxiv.org/abs/2206.03023.
+Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Alex Graves, Ioannis Antonoglou, Daan Wierstra, and Martin A. Riedmiller. Playing atari with deep reinforcement learning. CoRR, abs/1312.5602, 2013. URL http://arxiv.org/abs/1312.5602. +Ofir Nachum, Yinlam Chow, Bo Dai, and Lihong Li. Dualdice: Behavior-agnostic estimation of discounted stationary distribution corrections. ArXiv, abs/1906.04733, 2019. +A. Ng, Daishi Harada, and Stuart J. Russell. Policy invariance under reward transformations: Theory and application to reward shaping. In International Conference on Machine Learning, 1999. +Tianwei Ni, Harshit S. Sikchi, Yufei Wang, Tejus Gupta, Lisa Lee, and Benjamin Eysenbach. f-irl: Inverse reinforcement learning via state marginal matching. CoRR, abs/2011.04709, 2020. URL https://arxiv.org/abs/2011.04709. +Scott Niekum. Evolved intrinsic reward functions for reinforcement learning. Proceedings of the AAAI Conference on Artificial Intelligence, 24(1):1955-1956, Jul. 2010. doi: 10.1609/aaai.v24i1.7772. URL https://ojs.aaai.org/index.php/AAAI/article/view/7772. +OpenAI, Matthias Plappert, Raul Sampedro, Tao Xu, Ilge Akkaya, Vineet Kosaraju, Peter Welinder, Ruben D'Sa, Arthur Petron, Henrique Ponde de Oliveira Pinto, Alex Paino, Hyeonwoo Noh, Lilian Weng, Qiming Yuan, Casey Chu, and Wojciech Zaremba. Asymmetric self-play for automatic goal discovery in robotic manipulation. CoRR, abs/2101.04882, 2021. URL https://arxiv.org/abs/2101.04882. +Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, John Schulman, Jacob Hilton, Fraser Kelton, Luke Miller, Maddie Simens, Amanda Askell, Peter Welinder, Paul Christiano, Jan Leike, and Ryan Lowe. Training language models to follow instructions with human feedback, 2022. 
Matthias Plappert, Marcin Andrychowicz, Alex Ray, Bob McGrew, Bowen Baker, Glenn Powell, Jonas Schneider, Josh Tobin, Maciek Chociej, Peter Welinder, Vikash Kumar, and Wojciech Zaremba. Multi-goal reinforcement learning: Challenging robotics environments and request for research, 2018.
Yury Polyanskiy and Yihong Wu. Information Theory: From Coding to Learning. Cambridge University Press, 2022. URL https://people.lids.mit.edu/yp/homepage/data/itbook-export.pdf.
Doina Precup. Eligibility traces for off-policy policy evaluation. Computer Science Department Faculty Publication Series, pp. 80, 2000.
Martin L Puterman. Markov decision processes. *Handbooks in operations research and management science*, 2:331-434, 1990.
John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. Proximal policy optimization algorithms. CoRR, abs/1707.06347, 2017. URL http://arxiv.org/abs/1707.06347.

Dhruv Shah, Benjamin Eysenbach, Gregory Kahn, Nicholas Rhinehart, and Sergey Levine. Ving: Learning open-world navigation with visual goals. CoRR, abs/2012.09812, 2020. URL https://arxiv.org/abs/2012.09812.
David Silver, Thomas Hubert, Julian Schrittwieser, Ioannis Antonoglou, Matthew Lai, Arthur Guez, Marc Lanctot, Laurent Sifre, Dharshan Kumaran, Thore Graepel, Timothy P. Lillicrap, Karen Simonyan, and Demis Hassabis. Mastering chess and shogi by self-play with a general reinforcement learning algorithm. CoRR, abs/1712.01815, 2017. URL http://arxiv.org/abs/1712.01815.
David Silver, Satinder Singh, Doina Precup, and Richard S. Sutton. Reward is enough. Artificial Intelligence, 299:103535, 2021. ISSN 0004-3702. doi: https://doi.org/10.1016/j.artint.2021.103535. URL https://www.sciencedirect.com/science/article/pii/S0004370221000862.
Satinder Singh, Richard L. Lewis, Andrew G. Barto, and Jonathan Sorg. Intrinsically motivated reinforcement learning: An evolutionary perspective. IEEE Transactions on Autonomous Mental Development, 2(2):70-82, 2010.
doi: 10.1109/TAMD.2010.2051031. +Emanuel Todorov, Tom Erez, and Yuval Tassa. Mujoco: A physics engine for model-based control. In 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 5026-5033. IEEE, 2012. doi: 10.1109/IROS.2012.6386109. +Ahmed Touati, Amy Zhang, Joelle Pineau, and Pascal Vincent. Stable policy optimization via off-policy divergence regularization. CoRR, abs/2003.04108, 2020. URL https://arxiv.org/abs/2003.04108. +Ronald J Williams and Jing Peng. Function optimization using connectionist reinforcement learning algorithms. Connection Science, 3(3):241-268, 1991. +Peter R Wurman, Samuel Barrett, Kenta Kawamoto, James MacGlashan, Kaushik Subramanian, Thomas J Walsh, Roberto Capobianco, Alisa Devlic, Franziska Eckert, Florian Fuchs, et al. Outracing champion gran turismo drivers with deep reinforcement learning. Nature, 602(7896): 223-228, 2022. +Zeyu Zheng, Junhyuk Oh, and Satinder Singh. On learning intrinsic rewards for policy gradient methods. CoRR, abs/1804.06459, 2018. URL http://arxiv.org/abs/1804.06459. +Brian D. Ziebart. Modeling purposeful adaptive behavior with the principle of maximum causal entropy. 2010. +Brian D. Ziebart, Andrew L. Maas, J. Andrew Bagnell, and Anind K. Dey. Maximum entropy inverse reinforcement learning. In Dieter Fox and Carla P. Gomes (eds.), AAAI, pp. 1433-1438. AAAI Press, 2008. ISBN 978-1-57735-368-3. URL http://dblp.uni-trier.de/db/conf/aaai/aaai2008.html#ZiebartMBD08. 
# Appendix

A Analysis of $J(\theta)$

A.1 Proof for Theorem 5.1
A.2 Proof for Lemma 5.1

B Gradient based optimization

B.1 Derivation of gradients
B.2 Practical Algorithm
B.3 Discounted State-Visitations

C Gridworld Experiments

C.1 Description of the task
C.2 Performance of $f$ -PG

D Visualizing the learning signals

D.1 Description of the task
D.2 Comparing different $f$ -PG

E PointMaze experiments

# A Analysis of $J(\theta)$

In this section, we present the proofs for all the lemmas and theorems stated in Section 5.1.

# A.1 Proof for Theorem 5.1

To prove Theorem 5.1, we need the following lemmas. Lemma A.1 states that among all policies, the optimal policy has the highest state visitation at the goal.

Lemma A.1. Let $\mathcal{D}$ be the set of all possible state-visitation distributions for the agent following some policy $\pi \in \Pi$ , and let $\pi^{*}$ be the optimal goal-conditioned policy. The optimal policy's state-visitation distribution places the most measure at the goal among all $p_{\pi} \in \mathcal{D}$ , i.e., $p_{\pi^{*}}(g) \geq p_{\pi}(g), \forall p_{\pi} \in \mathcal{D}$ .

Proof. Let $\pi^{*}$ be the optimal policy and $p_{\pi^{*}}$ the corresponding state-visitation distribution. The reward for the sparse setting is designed as

$$
r (s) = \left\{ \begin{array}{l l} 1 & s = g, \\ 0 & \text {otherwise}. \end{array} \right.
$$

Hence the expected return $R_{\pi}$ for a policy $\pi$ is

$$
\begin{array}{l} R _ {\pi} = \mathbb {E} _ {p _ {\pi}} [ r (s) ] \\ = p _ {\pi} (g). \\ \end{array}
$$

The return for the optimal policy is maximal among all policies, so $R_{\pi^*} \geq R_{\pi}, \forall \pi \in \Pi$ . This implies $p_{\pi^*}(g) \geq p_{\pi}(g), \forall p_{\pi} \in \mathcal{D}$ .

Lemma A.2 states that the $f$ -divergence between $p_{\pi}(s)$ and $p_g(s)$ is a decreasing function with respect to $p_{\pi}(g)$ .
This means that the objective $J(\theta)$ attains its minimum value when $p_{\pi}(g)$ is highest.

Lemma A.2. $D_{f}(p_{\pi}(\cdot)||p_{g}(\cdot))$ is a decreasing function with respect to $p_{\pi}(g)$ for all $f$ such that $f^{\prime}(\infty)$ is defined.

Proof. The goal distribution is assumed to be a Dirac distribution, i.e., $p_{g}(s) = 1$ if $s = g$ and $0$ everywhere else. The $f$ -divergence between the agent's state-visitation distribution $p_{\pi}$ and the goal distribution $p_{g}$ can be written as

$$
\begin{array}{l} D _ {f} \left(p _ {\pi} \mid \mid p _ {g}\right) = \sum_ {s : p _ {g} (s) > 0} \left[ p _ {g} (s) f \left(\frac {p _ {\pi} (s)}{p _ {g} (s)}\right) \right] + f ^ {\prime} (\infty) \, p _ {\pi} [ p _ {g} = 0 ] \\ = f \left(p _ {\pi} (g)\right) + f ^ {\prime} (\infty) \left(1 - p _ {\pi} (g)\right). \\ \end{array}
$$

Let $\mathcal{F} = D_f(p_\pi ||p_g)$ . Differentiating $\mathcal{F}$ w.r.t. $p_{\pi}(g)$ , we get $\mathcal{F}' = f'(p_{\pi}(g)) - f'(\infty)$ . Since $f$ is convex (by the definition of an $f$ -divergence), $f'(x) \leq f'(y), \forall x \leq y$ .

Hence, if $f'(\infty)$ is defined, $\mathcal{F}' \leq 0$ , so $\mathcal{F} = D_f(p_\pi || p_g)$ is a decreasing function with respect to $p_\pi(g)$ .

Additionally, we need Lemma A.3 and Corollary 1 to complete the proof of Theorem 5.1.

Lemma A.3. If any two policies $\pi_1$ and $\pi_2$ have the same state visitation at a given goal, they have the same returns for that goal.

Proof. Follows directly from the definition of returns: $R_{\pi} = \mathbb{E}_{p_{\pi}}[r(s)] = p_{\pi}(g)$ . Hence two policies $\pi_1$ and $\pi_2$ with the same state visitation at the goal have the same returns.

Corollary 1. Any policy whose state-visitation distribution matches that of the optimal policy, $p_{\pi^*}$ , is optimal.

Proof. Directly follows from Lemma A.3. $\square$

Theorem 5.1.
The policy that minimizes $D_{f}(p_{\pi}||p_{g})$ , for a convex function $f$ with $f(1) = 0$ and $f'(\infty)$ defined, is the optimal policy.

Proof. Lemma A.1 proves that the optimal policy has the maximum state-visitation probability at the goal. Lemma A.2 proves that the $f$ -divergence objective decreases as the state-visitation probability at the goal increases. In other words, to minimize the $f$ -divergence, we need to maximize the state visitation at the goal. Corollary 1 further indicates that any policy that attains the state-visitation distribution of the optimal policy, i.e., any policy that maximizes the state visitation at the goal state, is an optimal policy.

# A.2 Proof for Lemma 5.1

Lemma 5.1. $fkl$ -PG produces a policy that maximizes the reward $\log p_{g}(s)$ along with the entropy of the state-visitation distribution.

Proof. For $fkl$ -PG, $f = u\log u$ . Hence, $J(\theta) = D_{f}(p_{\pi}||p_{g})$ can be written as

$$
\begin{array}{l} D _ {f} (p _ {\pi} | | p _ {g}) = \mathbb {E} _ {p _ {\pi}} \left[ \log \frac {p _ {\pi}}{p _ {g}} \right] \\ = - \left[ \mathbb {E} _ {p _ {\pi}} \left[ \log p _ {g} \right] - \mathbb {E} _ {p _ {\pi}} \left[ \log p _ {\pi} \right] \right] \\ = - \left[ \mathbb {E} _ {p _ {\pi}} [ \log p _ {g} ] + \mathcal {H} (p _ {\pi}) \right] \\ \end{array}
$$

where $\mathcal{H}(p_{\pi})$ is the entropy of the agent's state-visitation distribution. Minimizing $D_{f}(p_{\pi}||p_{g})$ thus corresponds to maximizing the reward $r(s) = \log p_g(s)$ along with the entropy of $p_{\pi}$ .

A similar result can be proved for the $\chi^2$ divergence:

Lemma A.4. If $f(u) = (u - 1)^2$ ($\chi^2$ divergence), $D_f(p_\pi || p_g)$ is an upper bound on $D_{FKL}(p_\pi || p_g) - 1$ . Hence minimizing $D_{\chi^2}$ will also minimize $D_{FKL}$ , recovering the entropy-regularized policy.

Proof.
With $f = (u - 1)^2$ , $D_f(p_\pi || p_g)$ can be written as, +
+$$ +\begin{array}{l} D _ {f} \left(p _ {\pi} \mid \mid p _ {g}\right) = \int p _ {g} (s) \left(\frac {p _ {\pi} (s)}{p _ {g} (s)} - 1\right) ^ {2} d s \\ = \int p _ {g} (s) \left(\left(\frac {p _ {\pi} (s)}{p _ {g} (s)}\right) ^ {2} - 2 \frac {p _ {\pi} (s)}{p _ {g} (s)} + 1\right) d s \\ = \int p _ {\pi} (s) \frac {p _ {\pi} (s)}{p _ {g} (s)} - 2 p _ {\pi} (s) + p _ {g} (s) d s \\ = \int p _ {\pi} (s) \frac {p _ {\pi} (s)}{p _ {g} (s)} d s - 1 \\ = \mathbb {E} _ {p _ {\pi} (s)} \left[ \frac {p _ {\pi} (s)}{p _ {g} (s)} \right] - 1 \\ \end{array} +$$ +
+Since $x > \log x$ , +
+$$ +\begin{array}{l} \Longrightarrow \mathbb {E} _ {p _ {\pi} (s)} [ x ] > \mathbb {E} _ {p _ {\pi} (s)} [ \log x ] \\ \Longrightarrow \mathbb {E} _ {p _ {\pi} (s)} \left[ \frac {p _ {\pi} (s)}{p _ {g} (s)} \right] > \mathbb {E} _ {p _ {\pi} (s)} \left[ \log \frac {p _ {\pi} (s)}{p _ {g} (s)} \right] \\ \Longrightarrow \mathbb {E} _ {p _ {\pi} (s)} \left[ \frac {p _ {\pi} (s)}{p _ {g} (s)} \right] - 1 > \mathbb {E} _ {p _ {\pi} (s)} \left[ \log \frac {p _ {\pi} (s)}{p _ {g} (s)} \right] - 1 \\ \end{array} +$$ +
+Minimizing the LHS also minimizes the RHS, and the RHS is exactly $D_{KL}(p_{\pi}||p_g) - 1$ . The constant $-1$ has no effect on the minimization of $D_{KL}(p_{\pi}||p_g)$ . +
+# B Gradient based optimization +
+# B.1 Derivation of gradients +
+Theorem 4.1. The gradient of $J(\theta)$ as defined in Equation 4 is given by, +
+$$ +\nabla_ {\theta} J (\theta) = \frac {1}{T} \mathbb {E} _ {\tau \sim p _ {\theta} (\tau)} \left[ \left[ \sum_ {t = 1} ^ {T} \nabla_ {\theta} \log \pi_ {\theta} \left(a _ {t} \mid s _ {t}\right) \right] \left[ \sum_ {t = 1} ^ {T} f ^ {\prime} \left(\frac {p _ {\theta} \left(s _ {t}\right)}{p _ {g} \left(s _ {t}\right)}\right) \right] \right]. \tag {8} +$$ +
+Proof. We follow the proof from Ni et al. (2020). Let's start with the state-visitation distribution.
In Section 2, it was shown that the state-visitation distribution can be written as, + +$$ +\begin{array}{l} p _ {\theta} (s) \propto \int p (\tau) \Pi_ {t = 1} ^ {T} \pi_ {\theta} (s _ {t}) \eta_ {\tau} (s) d \tau \\ \Rightarrow p _ {\theta} (s) \propto \int p (\tau) e ^ {\sum_ {t = 1} ^ {T} \log \pi_ {\theta} (s _ {t})} \eta_ {\tau} (s) d \tau \\ \Longrightarrow p _ {\theta} (s) = \frac {\int p (\tau) e ^ {\sum_ {t = 1} ^ {T} \log \pi_ {\theta} \left(s _ {t}\right)} \eta_ {\tau} (s) d \tau}{\int \int p (\tau) e ^ {\sum_ {t = 1} ^ {T} \log \pi_ {\theta} \left(s _ {t}\right)} \eta_ {\tau} (s) d \tau d s} \\ \Longrightarrow p _ {\theta} (s) = \frac {\int p (\tau) e ^ {\sum_ {t = 1} ^ {T} \log \pi_ {\theta} \left(s _ {t}\right)} \eta_ {\tau} (s) d \tau}{\int p (\tau) e ^ {\sum_ {t = 1} ^ {T} \log \pi_ {\theta} \left(s _ {t}\right)} \int \eta_ {\tau} (s) d s d \tau} \\ \Longrightarrow p _ {\theta} (s) = \frac {\int p (\tau) e ^ {\sum_ {t = 1} ^ {T} \log \pi_ {\theta} (s _ {t})} \eta_ {\tau} (s) d \tau}{T \int p (\tau) e ^ {\sum_ {t = 1} ^ {T} \log \pi_ {\theta} (s _ {t})} d \tau} \\ \Longrightarrow p _ {\theta} (s) = \frac {f (s)}{Z} \\ \end{array} +$$ + +where $f(s) = \int p(\tau)e^{\sum_{t=1}^{T}\log\pi_{\theta}(s_t)}\eta_{\tau}(s)d\tau$ and $Z = T\int p(\tau)e^{\sum_{t=1}^{T}\log\pi_{\theta}(s_t)}d\tau$ . + +Differentiating w.r.t. 
$\pi_{\theta}(s^{*})$ +
+$$ +\frac {d f (s)}{d \pi_ {\theta} (s ^ {*})} = \frac {\int p (\tau) e ^ {\sum_ {t = 1} ^ {T} \log \pi_ {\theta} (s _ {t})} \eta_ {\tau} (s) \eta_ {\tau} (s ^ {*}) d \tau}{\pi_ {\theta} (s ^ {*})} +$$ +
+and, +
+$$ +\begin{array}{l} \frac {d Z}{d \pi_ {\theta} (s ^ {*})} = \frac {T \int p (\tau) e ^ {\sum_ {t = 1} ^ {T} \log \pi_ {\theta} (s _ {t})} \eta_ {\tau} (s ^ {*}) d \tau}{\pi_ {\theta} (s ^ {*})} \\ = \frac {T f (s ^ {*})}{\pi_ {\theta} (s ^ {*})} \\ \end{array} +$$ +
+Computing $\frac{dp_{\theta}(s)}{d\pi_{\theta}(s^{*})}$ using $\frac{df(s)}{d\pi_{\theta}(s^{*})}$ and $\frac{dZ}{d\pi_{\theta}(s^{*})}$ , +
+$$ +\begin{array}{l} \frac {d p _ {\theta} (s)}{d \pi_ {\theta} \left(s ^ {*}\right)} = \frac {Z \frac {d f (s)}{d \pi_ {\theta} \left(s ^ {*}\right)} - f (s) \frac {d Z}{d \pi_ {\theta} \left(s ^ {*}\right)}}{Z ^ {2}} \\ = \frac {\int p (\tau) e ^ {\sum_ {t = 1} ^ {T} \log \pi_ {\theta} (s _ {t})} \eta_ {\tau} (s) \eta_ {\tau} (s ^ {*}) d \tau}{Z \pi_ {\theta} (s ^ {*})} - \frac {f (s)}{Z} T \frac {f (s ^ {*})}{Z \pi_ {\theta} (s ^ {*})} \\ = \frac {\int p (\tau) e ^ {\sum_ {t = 1} ^ {T} \log \pi_ {\theta} (s _ {t})} \eta_ {\tau} (s) \eta_ {\tau} (s ^ {*}) d \tau}{Z \pi_ {\theta} (s ^ {*})} - \frac {T}{\pi_ {\theta} (s ^ {*})} p _ {\theta} (s) p _ {\theta} (s ^ {*}) \\ \end{array} +$$ +
+Now we can compute $\frac{dp_{\theta}(s)}{d\theta}$ , +
+$$ +\begin{array}{l} \frac {d p _ {\theta} (s)}{d \theta} = \int \frac {d p _ {\theta} (s)}{d \pi_ {\theta} (s ^ {*})} \frac {d \pi_ {\theta} (s ^ {*})}{d \theta} d s ^ {*} \\ = \int \left(\frac {\int p (\tau) e ^ {\sum_ {t = 1} ^ {T} \log \pi_ {\theta} (s _ {t})} \eta_ {\tau} (s) \eta_ {\tau} (s ^ {*}) d \tau}{Z \pi_ {\theta} (s ^ {*})} - \frac {T}{\pi_ {\theta} (s ^ {*})} p _ {\theta} (s) p _ {\theta} (s ^ {*})\right) \frac {d \pi_ {\theta} (s ^ {*})}{d \theta} d s ^ {*} \\ \end{array} +$$ +
+$$ +\begin{array}{l} = \frac {1}{Z} \iint p (\tau) e ^ {\sum_ {t = 1} ^ {T} \log \pi_ {\theta} (s _
{t})} \eta_ {\tau} (s) \eta_ {\tau} (s ^ {*}) \frac {\nabla_ {\theta} \pi_ {\theta} (s ^ {*})}{\pi_ {\theta} (s ^ {*})} d s ^ {*} d \tau - T p _ {\theta} (s) \int p _ {\theta} (s ^ {*}) \frac {\nabla_ {\theta} \pi_ {\theta} (s ^ {*})}{\pi_ {\theta} (s ^ {*})} d s ^ {*} \\ = \frac {1}{Z} \int p (\tau) e ^ {\sum_ {t = 1} ^ {T} \log \pi_ {\theta} (s _ {t})} \eta_ {\tau} (s) \sum_ {t = 1} ^ {T} \nabla_ {\theta} \log \pi_ {\theta} (s _ {t}) d \tau - T p _ {\theta} (s) \mathbb {E} _ {s \sim p _ {\theta} (s)} [ \nabla_ {\theta} \log \pi_ {\theta} (s) ] \\ \end{array} +$$ + +The objective $L(\theta) = D_{f}(p_{\theta}(s)||p_{g}(s)) = \int p_{g}(s)f\left(\frac{p_{\theta}(s)}{p_{g}(s)}\right)ds$ + +The gradient for $L(\theta)$ will be given by, + +$$ +\begin{array}{l} \nabla_ {\theta} L (\theta) = \int p _ {g} (s) f ^ {\prime} \left(\frac {p _ {\theta} (s)}{p _ {g} (s)}\right) \left(\frac {\nabla_ {\theta} p _ {\theta} (s)}{p _ {g} (s)}\right) d s \\ = \int \nabla p _ {\theta} (s) f ^ {\prime} \left(\frac {p _ {\theta} (s)}{p _ {g} (s)}\right) d s \\ = \int f ^ {\prime} \left(\frac {p _ {\theta} (s)}{p _ {g} (s)}\right) \left(\frac {1}{Z} \int p (\tau) e ^ {\sum_ {t = 1} ^ {T} \log \pi_ {\theta} (s _ {t})} \eta_ {\tau} (s) \sum_ {t = 1} ^ {T} \nabla_ {\theta} \log \pi_ {\theta} (s _ {t}) d \tau \right. \\ \left. 
- T p _ {\theta} (s) \mathbb {E} _ {s \sim p _ {\theta} (s)} \left[ \nabla_ {\theta} \log \pi_ {\theta} (s) \right]\right) d s \\ = \frac {1}{Z} \int p (\tau) e ^ {\sum_ {t = 1} ^ {T} \log \pi_ {\theta} (s _ {t})} \sum_ {t = 1} ^ {T} \nabla_ {\theta} \log \pi_ {\theta} (s _ {t}) \int \eta_ {\tau} (s) f ^ {\prime} \left(\frac {p _ {\theta} (s)}{p _ {g} (s)}\right) d s d \tau \\ - \int T p _ {\theta} (s) f ^ {\prime} \left(\frac {p _ {\theta} (s)}{p _ {g} (s)}\right) \mathbb {E} _ {s \sim p _ {\theta} (s)} [ \nabla_ {\theta} \log \pi_ {\theta} (s) ] d s \\ = \frac {1}{Z} \int p (\tau) e ^ {\sum_ {t = 1} ^ {T} \log \pi_ {\theta} (s _ {t})} \sum_ {t = 1} ^ {T} \nabla_ {\theta} \log \pi_ {\theta} (s _ {t}) \sum_ {t = 1} ^ {T} f ^ {\prime} \left(\frac {p _ {\theta} (s _ {t})}{p _ {g} (s _ {t})}\right) d \tau \\ - T \int p _ {\theta} (s) f ^ {\prime} \left(\frac {p _ {\theta} (s)}{p _ {g} (s)}\right) \mathbb {E} _ {s \sim p _ {\theta} (s)} [ \nabla_ {\theta} \log \pi_ {\theta} (s) ] d s \\ = \frac {1}{T} \int p _ {\theta} (\tau) \sum_ {t = 1} ^ {T} \nabla_ {\theta} \log \pi_ {\theta} (s _ {t}) \sum_ {t = 1} ^ {T} f ^ {\prime} \left(\frac {p _ {\theta} (s _ {t})}{p _ {g} (s _ {t})}\right) d \tau \\ - T \mathbb {E} _ {s \sim p _ {\theta} (s)} \left[ f ^ {\prime} \left(\frac {p _ {\theta} (s)}{p _ {g} (s)}\right) \right] \mathbb {E} _ {s \sim p _ {\theta} (s)} \left[ \nabla_ {\theta} \log \pi_ {\theta} (s) \right] \\ = \frac {1}{T} \mathbb {E} _ {\tau \sim p _ {\theta} (\tau)} \left[ \sum_ {t = 1} ^ {T} \nabla_ {\theta} \log \pi_ {\theta} (s _ {t}) \sum_ {t = 1} ^ {T} f ^ {\prime} \left(\frac {p _ {\theta} (s _ {t})}{p _ {g} (s _ {t})}\right) \right] \\ - T \mathbb {E} _ {s \sim p _ {\theta} (s)} \left[ f ^ {\prime} \left(\frac {p _ {\theta} (s)}{p _ {g} (s)}\right) \right] \mathbb {E} _ {s \sim p _ {\theta} (s)} [ \nabla_ {\theta} \log \pi_ {\theta} (s) ] \\ = \frac {1}{T} \left[ \mathbb {E} _ {\tau \sim p _ {\theta} (\tau)} \left[ \sum_ {t = 1} ^ {T} \nabla_ {\theta} \log 
\pi_ {\theta} (s _ {t}) \sum_ {t = 1} ^ {T} f ^ {\prime} \left(\frac {p _ {\theta} (s _ {t})}{p _ {g} (s _ {t})}\right) \right] \right. \\ \left. - \mathbb {E} _ {s \sim p _ {\theta} (s)} \left[ \sum_ {t = 1} ^ {T} f ^ {\prime} \left(\frac {p _ {\theta} (s _ {t})}{p _ {g} (s _ {t})}\right) \right] \mathbb {E} _ {\tau \sim p _ {\theta} (\tau)} \left[ \sum_ {t = 1} ^ {T} \nabla_ {\theta} \log \pi_ {\theta} (s _ {t}) \right] \right] \\ = \frac {1}{T} \mathbb {E} _ {\tau \sim p _ {\theta} (\tau)} \left[ \sum_ {t = 1} ^ {T} \nabla_ {\theta} \log \pi_ {\theta} (s _ {t}) \sum_ {t = 1} ^ {T} f ^ {\prime} \left(\frac {p _ {\theta} (s _ {t})}{p _ {g} (s _ {t})}\right) \right] \\ \end{array} +$$ +
+Theorem 4.2. Updating the policy using this gradient maximizes $\mathbb{E}_{p_{\theta}}[\eta_{\tau}(g)]$ . +
+Proof. In the goal-based setting, $p_g$ is sparse, so we need to use the full definition of the $f$ -divergence, $D_f(p_\theta ||p_g) = \sum_{p_g > 0} [p_g(s)f(\frac{p_\theta(s)}{p_g(s)})] + f'(\infty)p_\theta[p_g = 0] = \sum_{p_g > 0} [p_g(s)f(\frac{p_\theta(s)}{p_g(s)})] + f'(\infty)(1 - p_\theta(g))$ .
Differentiating with respect to $\theta$ gives, +
+$$ +\begin{array}{l} \nabla_ {\theta} L (\theta) = \left(f ^ {\prime} \left(p _ {\theta} (g)\right) - f ^ {\prime} (\infty)\right) \nabla_ {\theta} p _ {\theta} (g) \\ = \left(f ^ {\prime} \left(p _ {\theta} (g)\right) - f ^ {\prime} (\infty)\right) \left(\frac {1}{T} \int p _ {\theta} (\tau) \eta_ {\tau} (g) \sum_ {t = 1} ^ {T} \nabla_ {\theta} \log \pi_ {\theta} \left(s _ {t}\right) d \tau - T p _ {\theta} (g) \mathbb {E} _ {s \sim p _ {\theta} (s)} [ \nabla_ {\theta} \log \pi_ {\theta} (s) ]\right) \\ = \left(f ^ {\prime} \left(p _ {\theta} (g)\right) - f ^ {\prime} (\infty)\right) \left(\frac {1}{T} \mathbb {E} _ {\tau \sim p _ {\theta} (\tau)} \left[ \eta_ {\tau} (g) \sum_ {t = 1} ^ {T} \nabla_ {\theta} \log \pi_ {\theta} \left(s _ {t}\right) \right] \right. \\ \left. - p _ {\theta} (g) \mathbb {E} _ {\tau \sim p _ {\theta} (\tau)} \left[ \sum_ {t = 1} ^ {T} \nabla_ {\theta} \log \pi_ {\theta} (s _ {t}) \right]\right) \\ = \frac {1}{T} \left(f ^ {\prime} \left(p _ {\theta} (g)\right) - f ^ {\prime} (\infty)\right) \mathbb {E} _ {\tau \sim p _ {\theta} (\tau)} \left[ \sum_ {t = 1} ^ {T} \nabla_ {\theta} \log \pi_ {\theta} \left(s _ {t}\right) \eta_ {\tau} (g) \right] \\ \end{array} +$$ +
+The gradient has two terms. The first term, $\left(f'(p_{\theta}(g)) - f'(\infty)\right)$ , weighs the gradient based on the value of $p_{\theta}(g)$ and is always negative. It acts as an adaptive learning schedule, reducing its magnitude as $p_{\theta}(g)$ increases. The second term is the gradient of $\mathbb{E}_{p_{\theta}}[\eta_{\tau}(g)]$ . Hence, following $\nabla_{\theta}L(\theta)$ minimizes $L(\theta)$ , which corresponds to maximizing $\mathbb{E}_{p_{\theta}}[\eta_{\tau}(g)]$ . +
+# B.2 Practical Algorithm +
+As mentioned in Section 4, the derived gradient is highly sample inefficient. We employ established techniques for improving policy gradients, such as importance sampling.
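For reference, the vanilla estimator of Equation 8 can be sketched directly. This is a minimal illustration, assuming the per-step policy scores and the density ratios $p_{\theta}(s_t)/p_g(s_t)$ are already available as arrays (in practice they come from the policy network and fitted density estimates); the dictionary of derivatives only lists a few common choices of $f$:

```python
import numpy as np

# Derivatives f'(u) for some common f-divergences, with u = p_theta(s)/p_g(s).
F_PRIME = {
    "fkl":  lambda u: np.log(u) + 1.0,   # f(u) = u log u
    "rkl":  lambda u: -1.0 / u,          # f(u) = -log u
    "chi2": lambda u: 2.0 * (u - 1.0),   # f(u) = (u - 1)^2
}

def f_pg_gradient(grad_log_pi, ratios, f_name="fkl"):
    """Monte-Carlo estimate of the gradient in Equation 8.

    grad_log_pi: (N, T, d) array of per-step scores grad_theta log pi(a_t | s_t)
                 for N sampled trajectories and a d-dimensional parameter.
    ratios:      (N, T) array of density ratios p_theta(s_t) / p_g(s_t).
    """
    T = ratios.shape[1]
    score_sum = grad_log_pi.sum(axis=1)           # (N, d): sum_t grad log pi
    weight = F_PRIME[f_name](ratios).sum(axis=1)  # (N,):   sum_t f'(ratio)
    return (score_sum * weight[:, None]).mean(axis=0) / T
```

Note that at $u = 1$ (visitation already matching the goal density) the $\chi^2$ weight vanishes, so the estimator returns a zero gradient, as expected.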
+
+The first modification is to use importance sampling weights to allow sampling from a previous policy $\pi_{\theta^{\prime}}$ . The gradient then becomes, +
+$$ +\nabla_ {\theta} J (\theta) = \mathbb {E} _ {\tau \sim p _ {\theta^ {\prime}} (\tau)} \left[ \frac {\pi_ {\theta} (\tau)}{\pi_ {\theta^ {\prime}} (\tau)} \left[ \sum_ {t = 1} ^ {T} \nabla_ {\theta} \log \pi_ {\theta} \left(a _ {t} \mid s _ {t}\right) \right] \left[ \sum_ {t = 1} ^ {T} f ^ {\prime} \left(\frac {p _ {\theta} (s _ {t})}{p _ {g} (s _ {t})}\right) \right] \right]. \tag {9} +$$ +
+To reduce the variance of the gradients, the objective can be modified to exploit the causal structure of the MDP, ensuring that the action taken at step $t$ only affects rewards at times $t' \in [t, T]$ . Moreover, a discount factor $\gamma$ is used to prevent the sum $\sum_{t'=t}^{T} f'\left(\frac{p_{\theta}(s_{t'})}{p_g(s_{t'})}\right)$ from exploding. +
+Additionally, the expectation is taken over states rather than trajectories, +
+$$ +\nabla_ {\theta} J (\theta) = \mathbb {E} _ {s _ {t}, a _ {t} \sim p _ {\theta^ {\prime}} \left(s _ {t}, a _ {t}\right)} \left[ \frac {\pi_ {\theta} \left(a _ {t} \mid s _ {t}\right)}{\pi_ {\theta^ {\prime}} \left(a _ {t} \mid s _ {t}\right)} \nabla_ {\theta} \log \pi_ {\theta} \left(a _ {t} \mid s _ {t}\right) \sum_ {t ^ {\prime} = t} ^ {T} \gamma^ {t ^ {\prime}} f ^ {\prime} \left(\frac {p _ {\theta} \left(s _ {t ^ {\prime}}\right)}{p _ {g} \left(s _ {t ^ {\prime}}\right)}\right) \right]. \tag {10} +$$ +
+This gradient computation is still inefficient: even though the samples come from a previous policy $\pi_{\theta'}$ , it still requires computing $\sum_{t'=t}^{T} \gamma^{t'} f'\left(\frac{p_{\theta}(s_{t'})}{p_g(s_{t'})}\right)$ , which means iterating through full trajectories. We can instead accept a small bias in the gradient by replacing $f'\left(\frac{p_{\theta}(s_{t'})}{p_g(s_{t'})}\right)$ with $f'\left(\frac{p_{\theta'}(s_{t'})}{p_g(s_{t'})}\right)$ in the objective.
To ensure the bias stays small, an additional constraint is added to keep $\theta'$ close to $\theta$ . Following the natural-gradient literature, the constraint we add is $D_{KL}(p_{\theta'}||p_{\theta})$ . Proximal Policy Optimization (Schulman et al., 2017) showed that, in practical scenarios, a clipped objective can be enough to do away with the KL regularization term. The final objective that we use is, +
+$$ +\nabla_ {\theta} J (\theta) = \mathbb {E} _ {s _ {t}, a _ {t} \sim p _ {\theta^ {\prime}} \left(s _ {t}, a _ {t}\right)} \left[ \min \left(r _ {\theta} \left(s _ {t}\right) F _ {\theta^ {\prime}} \left(s _ {t}\right), \mathrm {clip} \left(r _ {\theta} \left(s _ {t}\right), 1 - \epsilon , 1 + \epsilon\right) F _ {\theta^ {\prime}} \left(s _ {t}\right)\right) \right], \tag {11} +$$ +
+where $r_{\theta}(s_t) = \frac{\nabla_\theta\pi_\theta(a_t|s_t)}{\pi_{\theta'}(a_t|s_t)}$ and $F_{\theta^{\prime}}(s_{t}) = \sum_{t^{\prime} = t}^{T}\gamma^{t^{\prime}}f^{\prime}\left(\frac{p_{\theta^{\prime}}(s_{t^{\prime}})}{p_{g}(s_{t^{\prime}})}\right)$ . +
+# B.3 Discounted State-Visitations +
+The state-visitation distribution defined so far has not considered a discount factor. To include discounting, the state-visitation frequency is modified to $\eta_{\tau}(s) = \sum_{t=1}^{T} \gamma^t \mathbb{1}_{s_t=s}$ . Throughout the derivation of the gradient, we used $\int \eta_{\tau}(s) f(s) ds = \sum_{t=1}^{T} f(s_t)$ ; with discounting this becomes $\int \eta_{\tau}(s) f(s) ds = \sum_{t=1}^{T} \gamma^t f(s_t)$ . The corresponding gradient is, +
+$$ +\nabla_ {\theta} J (\theta) = \frac {1}{T} \mathbb {E} _ {\tau \sim p _ {\theta} (\tau)} \left[ \left[ \sum_ {t = 1} ^ {T} \gamma^ {t} \nabla_ {\theta} \log \pi_ {\theta} \left(a _ {t} \mid s _ {t}\right) \right] \left[ \sum_ {t = 1} ^ {T} \gamma^ {t} f ^ {\prime} \left(\frac {p _ {\theta} \left(s _ {t}\right)}{p _ {g} \left(s _ {t}\right)}\right) \right] \right].
\tag {12} +$$ +
+This gradient can be modified as before, +
+$$ +\nabla_ {\theta} J (\theta) = \mathbb {E} _ {s _ {t}, a _ {t} \sim p _ {\theta} (s _ {t}, a _ {t})} \left[ \gamma^ {t} \nabla_ {\theta} \log \pi_ {\theta} \left(a _ {t} \mid s _ {t}\right) \sum_ {t ^ {\prime} = t} ^ {T} \gamma^ {t ^ {\prime}} f ^ {\prime} \left(\frac {p _ {\theta} (s _ {t ^ {\prime}})}{p _ {g} (s _ {t ^ {\prime}})}\right) \right]. \tag {13} +$$ +
+Adding importance sampling to the gradient in Equation 13 yields a gradient very similar to Equation 10. In fact, Equation 10 is a biased estimate of the gradient of the $f$ -divergence between the discounted state-visitation distribution and the goal distribution. Either gradient can be used, but Equation 10 is preferred for long-horizon tasks. +
+# C Gridworld Experiments +
+# C.1 Description of the task +
+The task involves navigating a gridworld to reach a goal state that is enclosed in a room. The agent can move in any of the four directions and has no prior knowledge of where the goal is; it must explore the gridworld to find the path around the room to the goal. The task is illustrated in Figure 7. The green square represents the agent's position and the red square represents the goal. +
+State: The state visible to the policy consists of the normalized $x$ and $y$ coordinates. +
+Action: The action space is a discrete categorical distribution with four categories, one per direction: left, up, right, and down. +
+Reward: The task reward is 1 at the goal and 0 everywhere else. $f$ -PG does not require rewards, but the baselines use task rewards. +
+![](images/635824a1f85f64f72b654f56d66fbaaa2f5a5c6819f6d84e036fdc1ff5180edd.jpg) +Figure 7: Description of the gridworld: the bold lines show the walls, the green square is the start position, and the red square is the goal. +
+# C.2 Performance of $f$ -PG +
+In Section 5.1, we compared $fkl$ -PG and $rkl$ -PG with AIM (Durugkar et al., 2021) and GAIL (Ho & Ermon, 2016).
In Figure 8, we present comparisons with additional baselines: AIRL (Fu et al., 2017) and $f$ -AIRL (Ghasemipour et al., 2019). +
+# D Visualizing the learning signals +
+# D.1 Description of the task +
+To visualize the learning signals, we use the Reacher environment (Figure 9) (Todorov et al., 2012). The task involves rotating a reacher arm (two joints with one end fixed); the applied actions (torques) rotate the arm so that the free end reaches the goal. The goal is fixed at $(-0.21, 0)$ , and the goal distribution is a normal distribution centered at the goal with a standard deviation of 0.02. +
+![](images/99298086fc3996ed5829e6cd23541e99fd131d543dafcd690ea6d639903a1a0e.jpg) +Figure 9: The Reacher environment with fixed goal +
+![](images/7a9001c64f1016b2ad4a1c1d0ef3647f53ee46a150ebfad2bfb959bd389b9b34.jpg) +
+![](images/ad907b18fb615cd8f273b8d31e15f12215000bf4664513d4f4b124a962532acd.jpg) +
+![](images/4b2d2dfd42ca60610aaa255b8f8303833776d1ba8c52a6b5e3e2975b8727e670.jpg) +
+![](images/1a6473e131147dbefbdf096534ea35ed01835aeffc83f28c809be2cc6f4a19a4.jpg) +(a) FKL +(d) GAIL +Figure 8: Gridworld: The agent needs to move from the green circle to the red circle. The state visitations of the final policies are shown when using our framework for training ( $fkl$ , $rkl$ ) compared with AIM and GAIL trained on top of soft Q-learning. +
+![](images/6142918e1d80c39c033f002c4a1592080de021bb599996bf0d1a915dd90feeaa.jpg) +(b) RKL +(e) AIRL +
+![](images/3a0172d2e57c71918ccf9ead1ee762397727190e0f40bba3922ab370eece252b.jpg) +(c) AIM +(f) FAIRL +
+State: The original environment's state contains several components, but here we simplify the state space to the position of the free end and the goal position. +
+Actions: The actions are two-dimensional real numbers in $[-1, 1]$ , corresponding to the torques applied to the two joints. +
+Reward: The reward is sparse, i.e., 1 when the goal is reached by the tip of the arm.
However, $f$ -PG itself does not use rewards for training policies. +
+# D.2 Comparing different $f$ -PG +
+Figure 10 shows the evolution of $-f^{\prime}\left(\frac{p_{\theta}(s)}{p_{g}(s)}\right)$ for the environment. The red regions correspond to signals with lower values, while the darker blue regions correspond to signals with higher values. For $fkl$ , the scale of these signals generally varies from 10 to $-5$ , while for $\chi^2$ , the scale varies from 600 to 50. Also, for the same objective, these scales generally shrink in magnitude as the policy trains. +
+The following can be observed from these plots: +
+1. In all cases, the signals are maximal at the goal, pulling the state visitations towards the goal. +2. All of them also push for exploration. This is most pronounced for $fkl$ and $\chi^2$ , which provide a significant push towards unexplored regions, showing their inclination towards entropy maximization and confirming the theory (Lemma 5.1). +
+# E PointMaze experiments +
+PointMaze environments (Fu et al., 2020) are continuous state-space domains where the agent needs to navigate to the goal in a 2D maze. The agent and the goal are spawned at random locations in the maze for every episode. There are three levels based on the difficulty of the maze, as shown in Figure 11. +
+State: The state consists of the agent's 2D position and velocity in the maze. The goal position is appended to the state. +
+Action: The actions are 2D real numbers in the range $[-1, 1]$ , corresponding to the forces applied to the agent in each of the two directions. +
+Reward: Although $f$ -PG does not use rewards, the baselines use the task reward, which is sparse (1 when the goal is reached and 0 everywhere else). +
+![](images/3a59a58de37d6d6f469f360be1b4a0bae60b0bae1ce1857d6125d2790e882dbd.jpg) +Figure 11: Description of PointMaze environments: PointMaze-U (left), PointMaze-Medium (middle), PointMaze-Large (right).
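For concreteness, the sparse goal-reaching reward used by the baselines in these tasks can be sketched as below; the success radius is an assumed value for illustration, not one taken from the environments:

```python
import numpy as np

def sparse_reward(achieved, goal, radius=0.05):
    """Return 1.0 when the achieved position is within `radius` of the goal,
    else 0.0. The `radius` threshold is an assumed value for illustration."""
    dist = np.linalg.norm(np.asarray(achieved, dtype=float) - np.asarray(goal, dtype=float))
    return float(dist <= radius)
```

Everywhere outside the small success region the reward is identically zero, which is precisely why the dense $-f'$ signals above are needed to drive exploration.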
+ +![](images/c9ea6328680673b8ca5e5a846e7ed6800c7e9da48bb65f270fe5b641870e0561.jpg) + +![](images/2416a3bc4b1429812c027d0a961831d8f4d90a67f531425d2545c805bfa0ea91.jpg) + +For the experiments in Section 6.2, the initial and goal states are sampled uniformly over all the "VALID" states i.e., states that can be reached by the agent. Such an initialization allows discriminator-based methods to fulfill their coverage assumptions. In Section 6.3, the initialization procedure is modified so that the initial state and the goal state are considerably far. This is done by restricting the sampling of the initial and goal states from disjoint (considerably far away) distributions as shown in Figure 5. + +![](images/74b3df634441c53ec55960e4cb7872c2274db755be5e0c2b934c057f244d5ebd.jpg) + +![](images/f4762df4382138536f637a62e26130734a761d9b76a94689924aa4b651b28bbd.jpg) + +![](images/e20a7cd9ba3c5ef1458d4012a3fb4f57843f0526eeab3338272cb37bc61d052a.jpg) + +![](images/0eebcc7ed8a740f661352f98afaa1c6c855b9ac05a9d8f4225b3951902c5bd55.jpg) + +![](images/4936362145d567c04489e61d644ec97f23a200c54f75376fe3c0e0137040377e.jpg) + +![](images/772e1bfa2d12392642cc7f8efe569f19536f8fced875f85c8d390f2419a88c7c.jpg) + +![](images/0a13ab17e2782417456c70757356bcfc950ab3abf3e15cc87426c5831798bcaa.jpg) + +![](images/c6dec45eeb87c69de79003b3af15b03e901b1b21e268d2887c10eb2c7e49e7f3.jpg) +(a) FKL + +![](images/79ce5f4100c6a924d9fc788dab757a54b4abfd1570ceed49733c3765a1b639a8.jpg) + +![](images/da66f89c5e281a25d7ce31038791a76ab9f997826c259518db0a9c7463ccd400.jpg) + +![](images/236ee0608314771462f67132acf2f0332706ed1a30f6e6f7e314b4149e879d90.jpg) + +![](images/6ca214b7f4e1504c2f9f4ba17914ed34e1482da605837744ac0d5ce2540b351d.jpg) + +![](images/6bfd44751aebac5d686eb2c2b82169860857d2fa9169c8fdb1f8e9adc3a7efab.jpg) + +![](images/2f9c45bc4c5d21453eab449dd0776c86c3a195862757ad491ef00b91c1f88fc8.jpg) + +![](images/582ca8e3b2cd7832b985c4be662be7a63633e5aee022f689ea56efd5567934fd.jpg) + 
+![](images/d76a79a8b908be2f96e533988c9cf628dffcbb62aeca72af0e48212106f17100.jpg) + +![](images/a7381eb8476ff9452ee79bc113907f4f498c4ee7a307ecc521411aedf1a5f91b.jpg) + +![](images/81e00a78cab481c7af5544804fd8a87649e1fd39b526f6fe42f2e877d666b740.jpg) +(b) RKL + +![](images/243e64bab2c283440b4c2c0e00506b29f5f13c9db3137fd0ca9af778bdebc95e.jpg) + +![](images/467742850d5e6b21ef931a945d46792223e61a738542867da86cf7974b6db0fc.jpg) + +![](images/48963379f8661f373bdf01c29be6288272c9cfb554d984561680f4d4eea2ab52.jpg) + +![](images/cc193a4e1240cb9dd937222ebdb82cb13e5a1eea147ab1903d10136a2feb95c3.jpg) + +![](images/9e866783a63d8f02ad87527d88c1e18e184165453f31ae85671017f3806e803f.jpg) + +![](images/bcfafa87c5d2d3f0930533e13d4396e25ce0dc4bd64d046399e92c3f8ff7b0fb.jpg) + +![](images/27b965f8557d98b6b88ad8e8ba38017b9be2646dbd4e3aa37cd70889b3928b3c.jpg) + +![](images/830928d7d0ee462596d4af957c73089326e8f9e0ee36e9693f24a7e35fb90603.jpg) + +![](images/ff5c3848fda31230b3424c94418e4522a4d7ad75f5853aed8c9c3448ce583c65.jpg) + +![](images/780508ba85b0ea646af86fc9f0a03a1e340330630d6d39c4c659f4e8f53d95da.jpg) +(c)JS + +![](images/08d1268596abde29b04c47fc15f68828239533743d85848106081e9e3546bae7.jpg) + +![](images/90edb6f8b2d8e9570108ec0f3fc3f42991d8e39754bf544594f5c30243e6d6a4.jpg) + +![](images/f59c1f735b0ea2f6829e753c9b8d832cb7ba5e3f93737cc478c3b86823a94fab.jpg) + +![](images/112cc7790532ecffb964adf04488d710d1d9b53017521423180540e762ccf216.jpg) + +![](images/2932600303d44ca5bb0b8ba915919c27e440013a9f22dcdae606a003c97e3b71.jpg) + +![](images/a34494807c1fd6652f0d1e4d2614289c5656b39d697acdbda3252ec4b4a8a670.jpg) + +![](images/cc4b69211433a622f29188b3a4b334a069f8775fa5b5bb58240f1ac09676a489.jpg) + +![](images/d77963471e5b15dda5a1b8f51d8c4a308f5328edd25d619c9736bf1d8e81ea07.jpg) +Figure 10: Evolution of $-f^{\prime}(\frac{p_{\theta}(s)}{p_g(s)})$ along with the corresponding state-visitations for different $f$ -divergences - $fkl$ , $rkl$ , $js$ and $\chi^2$ . 
The scales for the value of these signals are not shown but they vary as the policy converges. $f$ -PG provides dense signals for pushing towards exploration. + +![](images/0ffc863b6273fe5c54440dc804078a226da75c14b7ea3cdba8bac7d52fb5d3fe.jpg) + +![](images/ae076c3305596e860e804597cdf1675eceeb75f1ea6a8afaec90a097fff44ac8.jpg) +(d) $\chi^2$ + +![](images/959dbfdb49a763e44bb6042067e51583ca6c64bba4be2b097edc32e09112c441.jpg) + +![](images/fac9c4a46a3a6815fd4bff39faab36dce8c1c9cd13f41e4c6b59c53370e429e9.jpg) \ No newline at end of file diff --git a/fpolicygradientsageneralframeworkforgoalconditionedrlusingfdivergences/images.zip b/fpolicygradientsageneralframeworkforgoalconditionedrlusingfdivergences/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..ff46c81b58a2d513e1b0ee0307aa81ca0a0b3c41 --- /dev/null +++ b/fpolicygradientsageneralframeworkforgoalconditionedrlusingfdivergences/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:611a02903614b90485539f9bfc5011a485225d9a2e933e4b144c9d06a905b651 +size 1174255 diff --git a/fpolicygradientsageneralframeworkforgoalconditionedrlusingfdivergences/layout.json b/fpolicygradientsageneralframeworkforgoalconditionedrlusingfdivergences/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..cc86f21c4d4a72c42d73af0f45aa09c6e89541bd --- /dev/null +++ b/fpolicygradientsageneralframeworkforgoalconditionedrlusingfdivergences/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:09acadd88a5830c3c78bd8b1f087e2e2f61127bfd69b4f51f9045352ec4e4cf1 +size 986017 diff --git a/iscanidentifyingcausalmechanismshiftsamongnonlinearadditivenoisemodels/025dad01-183e-408f-b811-9bb744cef25c_content_list.json b/iscanidentifyingcausalmechanismshiftsamongnonlinearadditivenoisemodels/025dad01-183e-408f-b811-9bb744cef25c_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..47ed1f35714971e60164bbe7a4517bfe562ad847 
--- /dev/null +++ b/iscanidentifyingcausalmechanismshiftsamongnonlinearadditivenoisemodels/025dad01-183e-408f-b811-9bb744cef25c_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f7a6492c49b1158acaafb1adb87bb3327be9d842bb7274b9cdb98121611230c2 +size 200186 diff --git a/iscanidentifyingcausalmechanismshiftsamongnonlinearadditivenoisemodels/025dad01-183e-408f-b811-9bb744cef25c_model.json b/iscanidentifyingcausalmechanismshiftsamongnonlinearadditivenoisemodels/025dad01-183e-408f-b811-9bb744cef25c_model.json new file mode 100644 index 0000000000000000000000000000000000000000..7d188e93cf1936dd467aae0a9a003c685249c2aa --- /dev/null +++ b/iscanidentifyingcausalmechanismshiftsamongnonlinearadditivenoisemodels/025dad01-183e-408f-b811-9bb744cef25c_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:94695964932a25379e08ee57fd2706ecfc8ed2e2dd52407f464244164d682883 +size 241322 diff --git a/iscanidentifyingcausalmechanismshiftsamongnonlinearadditivenoisemodels/025dad01-183e-408f-b811-9bb744cef25c_origin.pdf b/iscanidentifyingcausalmechanismshiftsamongnonlinearadditivenoisemodels/025dad01-183e-408f-b811-9bb744cef25c_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..4f879f1ee1e0c40f547fdcbcb5912762b50c218f --- /dev/null +++ b/iscanidentifyingcausalmechanismshiftsamongnonlinearadditivenoisemodels/025dad01-183e-408f-b811-9bb744cef25c_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c7f27f37295af3ec1848694be0a66e7f996142dab20618be7685e989428b72b0 +size 3459036 diff --git a/iscanidentifyingcausalmechanismshiftsamongnonlinearadditivenoisemodels/full.md b/iscanidentifyingcausalmechanismshiftsamongnonlinearadditivenoisemodels/full.md new file mode 100644 index 0000000000000000000000000000000000000000..b193bda26ac7c34bd4019698524cd9df05d28290 --- /dev/null +++ b/iscanidentifyingcausalmechanismshiftsamongnonlinearadditivenoisemodels/full.md @@ -0,0 +1,973 
@@ +# iSCAN: Identifying Causal Mechanism Shifts among Nonlinear Additive Noise Models + +Tianyu Chen*† Kevin Bello‡§ Bryon Aragam‡ Pradeep Ravikumar§ + +$^{\dagger}$ Department of Statistics and Data Science, University of Texas at Austin + +$^{\ddagger}$ Booth School of Business, University of Chicago + +$^{\S}$ Machine Learning Department, Carnegie Mellon University + +# Abstract + +Structural causal models (SCMs) are widely used in various disciplines to represent causal relationships among variables in complex systems. Unfortunately, the underlying causal structure is often unknown, and estimating it from data remains a challenging task. In many situations, however, the end goal is to localize the changes (shifts) in the causal mechanisms between related datasets instead of learning the full causal structure of the individual datasets. Some applications include root cause analysis, analyzing gene regulatory network structure changes between healthy and cancerous individuals, or explaining distribution shifts. This paper focuses on identifying the causal mechanism shifts in two or more related datasets over the same set of variables—without estimating the entire DAG structure of each SCM. Prior work under this setting assumed linear models with Gaussian noises; instead, in this work we assume that each SCM belongs to the more general class of nonlinear additive noise models (ANMs). A key technical contribution of this work is to show that the Jacobian of the score function for the mixture distribution allows for the identification of shifts under general non-parametric functional mechanisms. Once the shifted variables are identified, we leverage recent work to estimate the structural differences, if any, for the shifted variables. Experiments on synthetic and real-world data are provided to showcase the applicability of this approach. Code implementing the proposed method is open-source and publicly available at https://github.com/kevinsbello/iSCAN. 
+ +# 1 Introduction + +Structural causal models (SCMs) are powerful models for representing causal relationships among variables in a complex system [54, 58]. Every SCM has an underlying graphical structure that is generally assumed to be a directed acyclic graph (DAG). Identifying the DAG structure of an SCM is crucial since it enables reasoning about interventions [54]. Nonetheless, in most situations, scientists can only access observational or interventional data, or both, while the true underlying DAG structure remains unknown. As a result, in numerous disciplines such as computational biology [66, 30, 20], epidemiology [64], medicine [61, 62], and econometrics [34, 28, 18], it is critically important to develop methods that can estimate the entire underlying DAG structure based on available data. This task is commonly referred to as causal discovery or structure learning, for which a variety of algorithms have been proposed over the last decades. + +Throughout this work, we make the assumption of causal sufficiency (i.e., non-existence of unobserved confounders). Under this condition alone, identifying the underlying DAG structure is not possible in general, and remains worst-case NP-complete [14, 16]. Indeed, prominent methods such as PC [71] + +and GES [15] additionally require the arguably strong faithfulness assumption [77] to consistently estimate, in large samples, the Markov equivalent class of the underlying DAG. However, these methods are not consistent in high-dimensions unless one additionally assumes sparsity or small maximum-degree of the true DAG [36, 50, 78]. Consequently, the existence of hub nodes, which is a well-known feature in several networks [5, 6, 7], significantly complicates the DAG learning problem. + +In many situations, however, the end goal is to detect shifts (changes) in the causal mechanisms between two (or more) related SCMs rather than recovering the entire underlying DAG structure of each SCM. 
For example, examining the mechanism changes in the gene regulatory network structure between healthy individuals and those with cancer may provide insights into the genetic factors contributing to the specific cancer; within biological pathways, genes could regulate various target gene groups depending on the cellular environment or the presence of particular disease conditions [32, 60]. In these examples, while the individual networks could be dense, the number of mechanism shifts could be sparse [69, 74, 55]. Finally, in root cause analysis, the goal is to identify the sources that originated observed changes in a joint distribution; this is precisely the setting we study in this work, where we model the joint distributions via SCMs, as also done in [52, 33].

In more detail, we focus on the problem of identifying mechanism shifts given datasets from two or more environments (SCMs) over the same observables. We assume that each SCM belongs to the class of additive noise models (ANMs) [29], i.e., each variable is defined as a nonlinear function over a subset of the remaining variables plus a random noise (see Section 2 for formal definitions). Importantly, we do not make any structural assumptions (e.g., sparsity, small maximum degree, or bounded tree-width) on the individual DAGs. Even though ANMs are well-known to be identifiable [29, 59], we aim to detect the local distribution changes without estimating the full structures individually. See Figure 1 for a toy example of what we aim to estimate. A similar setting to this problem was studied in [82, 23], albeit in the restrictive linear setting. Finally, it is worth noting that even with complete knowledge of the entire structure of each SCM, assessing changes in non-parametric functions across different groups or environments remains a very challenging problem [see for instance, 44].

Contributions.
Motivated by recent developments on causal structure learning of ANMs [65], we propose a two-fold algorithm that (1) identifies shifted variables (i.e., variables for which their causal mechanism has changed across the environments); and (2) if needed, for each shifted variable, estimates the structural changes among the SCMs. More concretely, we make the following set of contributions:

- To identify shifted variables (Definition 3), we prove that the variance of the diagonal elements of the Hessian matrix associated with the log-density of the mixture distribution unveils information to detect distribution shifts in the leaves of the DAGs (see Theorem 1). Due to this result, our algorithm (Algorithm 1) iteratively chooses a particular leaf variable and determines whether or not such variable is shifted. Importantly, this detection step does not rely on any structural assumptions on the individual DAGs, and can consistently detect distribution shifts for non-parametric functionals under very mild conditions such as second-order differentiability.
- To identify structurally shifted edges (Definition 4), we propose a nonparametric local parents recovery method (Algorithm 2) based on a recent measure of conditional dependence [3]. In addition, based on recent results in [4], we provide a theoretical justification for the asymptotic consistency of Algorithm 2 in Theorem 2. Importantly, since structural changes can only occur on shifted nodes, this second step can be conducted much more efficiently when the sparse mechanism shift hypothesis [69] holds, which posits that only a small subset of the causal model's mechanisms change.
- We empirically demonstrate that our method can outperform existing methods such as DCI, which is tailored for linear models, as well as related methods for estimating unknown intervention targets such as UT-IGSP [72]. See Section 5 and Appendix C for more details.
Moreover, in Section 5.2, we provide experiments on an ovarian cancer dataset, thus showcasing the applicability of our method.

![](images/59882e453574855e136fe998bf35be38bc05eab5ed37bd65ea84eb4c57d6ab6e.jpg)
(a) Underlying SCM $\mathcal{M}^*$.

$$
\begin{array}{l}
X_1 = f_1^*(N_1) \stackrel{\mathrm{def}}{=} N_1, \\
X_2 = f_2^*(X_3, N_2) \stackrel{\mathrm{def}}{=} \tanh(X_3) + N_2, \\
X_3 = f_3^*(X_1, N_3) \stackrel{\mathrm{def}}{=} \operatorname{sinc}(X_1) + N_3, \\
X_4 = f_4^*(X_1, N_4) \stackrel{\mathrm{def}}{=} X_1^3 - X_1 + N_4, \\
X_5 = f_5^*(X_3, X_4, N_5) \stackrel{\mathrm{def}}{=} X_4 \cdot \sin(X_3) + N_5.
\end{array}
$$

(b) Structural equations of the unknown SCM $\mathcal{M}^*$.

![](images/cb3c83e3b022e3f9b7f9380becb6958b8891eff26c0f95396df06ca6c74bd784.jpg)
(c) Env. $\mathcal{E}_1$, originated by an unknown intervention on $X_5$.
(d) Env. $\mathcal{E}_2$, originated by unknown interventions on $X_3$ and $X_5$.

![](images/fec3b4ef7730099c01e991bf8ddc974b3e7a876f3458b0ba45f13f4376ac4bca.jpg)
(e) Shifts in mechanisms between $\mathcal{E}_1$ and $\mathcal{E}_2$.
Figure 1: Illustration of two different environments (see Definition 2) in (1c) and (1d), both originated from the underlying SCM in (1a) with structural equations given in (1b). Between the two environments, we observe a change in the causal mechanisms of variables $X_3$ and $X_5$ — the red nodes in (1e). Specifically, for $X_5$, we observe that its functional dependence changed from $X_4$ in $\mathcal{E}_1$ to $X_3$ in $\mathcal{E}_2$.
For $X_3$, its structural dependence has not changed between $\mathcal{E}_1$ and $\mathcal{E}_2$; only its functional mechanism changed, from $\mathrm{sinc}(X_1)$ in $\mathcal{E}_1$ to the sigmoid function $\sigma(X_1)$ in $\mathcal{E}_2$. Finally, in (1e), the red edges represent the structural changes in the mechanisms. The non-existence of an edge from $X_1$ to $X_3$ indicates that the structural relation between $X_1$ and $X_3$ is invariant.

# 2 Preliminaries and Background

In this section we introduce notation and formally define the problem setting. We use $[d]$ to denote the set of integers $\{1, \ldots, d\}$. Let $G = ([d], E)$ be a DAG with node set $[d]$ and a set of directed edges $E \subset [d] \times [d]$, where any $(i, j) \in E$ indicates an edge from $i$ to $j$. Also, let $X = (X_1, \ldots, X_d)$ denote a $d$-dimensional vector of random variables. An SCM $\mathcal{M} = (X, f, \mathbb{P}_N)$ over $d$ variables is generally defined as a collection of $d$ structural equations of the form:

$$
X_j = f_j\left(\mathrm{PA}_j, N_j\right), \quad \forall j \in [d], \tag{1}
$$

where $\mathrm{PA}_j \subseteq \{X_1, \ldots, X_d\} \setminus \{X_j\}$ are the direct causes (or parents) of $X_j$; $f = \{f_j\}_{j=1}^d$ is a set of functional mechanisms $f_j : \mathbb{R}^{|\mathrm{PA}_j|+1} \to \mathbb{R}$; and $\mathbb{P}_N$ is a joint distribution over the noise variables $N_j$, which we assume to be jointly independent. Moreover, the underlying graph $G$ of an SCM is constructed by drawing a directed edge from each $X_k \in \mathrm{PA}_j$ to $X_j$. We henceforth assume this graph to be acyclic, i.e., a DAG. Finally, every SCM $\mathcal{M}$ defines a unique distribution $\mathbb{P}_X$ over the variables $X$ [Proposition 6.3 in 58], which by the independence of the noise variables (a.k.a.
the Markovian assumption) admits the following factorization:

$$
\mathbb{P}(X) = \prod_{j=1}^{d} \mathbb{P}\left(X_j \mid \mathrm{PA}_j\right), \tag{2}
$$

where $\mathbb{P}(X_j \mid \mathrm{PA}_j)$ is referred to as the causal mechanism of $X_j$.

The model above is often too general to be identifiable. In this work, we consider noises that are additive.

Definition 1 (Additive noise models (ANMs)). An additive noise model is an SCM $\mathcal{M} = (X, f, \mathbb{P}_N)$ as in (1), where each structural assignment has the form:

$$
X_j = f_j\left(\mathrm{PA}_j\right) + N_j, \quad \forall j \in [d].
$$

Depending on the assumptions on $f_j$ and $N_j$, the underlying DAG of an ANM can be identifiable from observational data. E.g., when $f_j$ is linear and $N_j$ is Gaussian, in general one can only identify the Markov equivalence class (MEC) of the DAG, assuming faithfulness [54]. For linear models, an exception arises when assuming equal error variances [56, 78, 47] or non-Gaussian errors [70]. In addition, when $f_j$ is nonlinear in each component and three times differentiable, the DAG is also identifiable [59, 29]. Very recently, Rolland et al. [65] proved DAG identifiability when $f_j$ is nonlinear in each component and $N_j$ is Gaussian, using information from the score's Jacobian.

# 2.1 Data from multiple environments

Throughout this work we assume that we observe a collection of datasets, $\mathcal{D} = \{\pmb{X}^h\}_{h=1}^H$, from $H$ (possibly different) environments. Each dataset $\pmb{X}^h = \{X^{h,i}\}_{i=1}^{m_h}$ from environment $h$ contains $m_h$ (possibly non-independent) samples from the joint distribution $\mathbb{P}_X^h$, i.e., $\pmb{X}^h \in \mathbb{R}^{m_h \times d}$.
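To make the setup concrete, the toy ANM of Figure 1 can be sampled directly. Below is a minimal sketch in which, following Figure 1, a second environment replaces $\operatorname{sinc}(X_1)$ with the sigmoid $\sigma(X_1)$ in the mechanism of $X_3$; the standard-normal noises and sample sizes are illustrative assumptions, and the additional intervention on $X_5$ is omitted for brevity:

```python
import numpy as np

def sample_env(m, intervene_x3=False, seed=None):
    """Sample m observations from the toy ANM of Figure 1b.

    If intervene_x3 is True, the mechanism of X3 is soft-intervened,
    replacing sinc(X1) with sigma(X1). Note np.sinc is the normalized
    sinc, sin(pi x)/(pi x); the paper's convention may differ.
    """
    rng = np.random.default_rng(seed)
    N = rng.normal(size=(m, 5))                  # additive noises N1..N5
    X = np.empty((m, 5))
    X[:, 0] = N[:, 0]                            # X1 = N1
    if intervene_x3:
        X[:, 2] = 1.0 / (1.0 + np.exp(-X[:, 0])) + N[:, 2]  # X3 = sigma(X1) + N3
    else:
        X[:, 2] = np.sinc(X[:, 0]) + N[:, 2]     # X3 = sinc(X1) + N3
    X[:, 1] = np.tanh(X[:, 2]) + N[:, 1]         # X2 = tanh(X3) + N2
    X[:, 3] = X[:, 0] ** 3 - X[:, 0] + N[:, 3]   # X4 = X1^3 - X1 + N4
    X[:, 4] = X[:, 3] * np.sin(X[:, 2]) + N[:, 4]  # X5 = X4*sin(X3) + N5
    return X

# One dataset X^h per environment, as described above.
X1 = sample_env(500, intervene_x3=False, seed=0)  # environment E1
X2 = sample_env(500, intervene_x3=True, seed=1)   # environment E2
```

Note that the variables must be generated in a topological order (here $X_1, X_3, X_2, X_4, X_5$), since each mechanism consumes its parents' values.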
We consider that each environment originates from soft interventions [54] of an unknown underlying SCM $\mathcal{M}^*$ with DAG structure $G^*$ and joint distribution $\mathbb{P}^*(X) = \prod_{j=1}^d \mathbb{P}^*(X_j \mid \mathrm{PA}_j^*)$. Here $\mathrm{PA}_j^*$ denotes the parents (direct causes) of $X_j$ in $G^*$. Then, an environment arises from manipulations or shifts in the causal mechanisms of a subset of variables, transforming from $\mathbb{P}^*(X_j \mid \mathrm{PA}_j^*)$ to $\widetilde{\mathbb{P}}(X_j \mid \widetilde{\mathrm{PA}}_j)$. Throughout, we will make the common modularity assumption of causal mechanisms [54, 38], which postulates that an intervention on a node $X_j$ only changes the mechanism $\mathbb{P}(X_j \mid \mathrm{PA}_j)$, while all other mechanisms $\mathbb{P}(X_i \mid \mathrm{PA}_i)$, for $i \neq j$, remain unchanged.

Definition 2 (Environment). An environment $\mathcal{E}_h = (X, f^h, \mathbb{P}_N^h)$, with joint distribution $\mathbb{P}_X^h$ and density $p_x^h$, independently results from an SCM $\mathcal{M}^*$ by intervening on an unknown subset $S^h \subseteq [d]$ of causal mechanisms; that is, we can factorize the joint distribution $\mathbb{P}^h(X)$ as follows:

$$
\mathbb{P}^h(X) = \prod_{j \in [d]} \mathbb{P}^h\left(X_j \mid \mathrm{PA}_j^h\right) = \prod_{j \in S^h} \widetilde{\mathbb{P}}^h\left(X_j \mid \widetilde{\mathrm{PA}}_j^h\right) \prod_{j \notin S^h} \mathbb{P}^*\left(X_j \mid \mathrm{PA}_j^*\right), \tag{3}
$$

where $\widetilde{\mathrm{PA}}_j^h$ is a (possibly empty) subset of the underlying causal parents $\mathrm{PA}_j^*$, i.e., $\widetilde{\mathrm{PA}}_j^h \subseteq \mathrm{PA}_j^*$; and $\mathbb{P}^*(X_j \mid \mathrm{PA}_j^*)$ are the invariant mechanisms.

Remark 1.
In the literature [e.g., 55], it is common to find the assumption that in a soft intervention the direct causes remain invariant, i.e., $\widetilde{\mathrm{PA}}_j^h = \mathrm{PA}_j^*$ for all $j \in S^h, h \in [H]$. In this work we consider a more general setting where none, some, or all of the direct causes of an intervened node are removed, i.e., $\widetilde{\mathrm{PA}}_j^h \subseteq \mathrm{PA}_j^*$ for all $j \in S^h, h \in [H]$.

We next define shifted nodes (variables).

Definition 3 (Shifted node). Given $H$ environments $\{\mathcal{E}_h = (X, f^h, \mathbb{P}_N^h)\}_{h=1}^H$ originated from an ANM $\mathcal{M}^*$, a node $j$ is called a shifted node if there exist $h, h' \in [H]$ such that:

$$
\mathbb{P}^h\left(X_j \mid \mathrm{PA}_j^h\right) \neq \mathbb{P}^{h'}\left(X_j \mid \mathrm{PA}_j^{h'}\right).
$$

To conclude this section, we formally define the problem setting.

Problem setting. Given $H$ datasets $\{X^h\}_{h=1}^H$, where $X^h \sim \mathbb{P}_X^h$ consists of $m_h$ (possibly non-independent) samples from the environment distribution $\mathbb{P}_X^h$ originated from an underlying ANM $\mathcal{M}^*$, estimate the set of shifted nodes and structural differences.

We note that [82, 23] have studied the problem setting above for $H = 2$, assuming linear functions $f_j^h$ and Gaussian noises $N_j^h$. In this work, we consider a more challenging setting where the $f_j^h$ are nonparametric functions (see Section 3 for more details).

# 2.2 Related Work

First, we mention works most closely related to ours. The problem of learning the difference between undirected graphs has received much more attention than the directed case. E.g., [88, 46, 85, 19] develop algorithms for estimating the difference between Markov random fields and Ising models. See [87] for recent developments in this direction.
In the directed setting, [82, 23] propose methods for directly estimating the difference of linear ANMs with Gaussian noise. More recently, [67] studied the setting where a dataset is generated from a mixture of SCMs, and their method is capable of detecting changes in conditional distributions; however, due to the unknown membership of each sample, it is difficult to test for structural and functional changes. Moreover, in contrast to ours, all the aforementioned works on the directed setting rely on some form of faithfulness assumption.

Causal discovery from a single environment. One way to identify mechanism shifts (albeit inefficient) would be to estimate the individual DAGs for each environment and then test for structural differences across the different environments. A few classical and recent methods for learning DAGs from a single dataset include: constraint-based algorithms such as PC and FCI [71]; score-based methods, among them greedy approaches such as GES [16], likelihood-based methods [56, 47, 59, 2, 1, 29], and continuous-constrained learning [89, 51, 39, 8]; order-based methods [75, 41, 24, 65, 48]; methods that test for asymmetries [70, 12]; and hybrid methods [50, 76]. Finally, note that even if we perfectly estimate each individual DAG (assuming identifiable models such as ANMs), applying these methods would only identify structural changes. That is, for variables that have the same parents across all the environments, we would require an additional step to identify distributional changes.

Testing functional changes in multiple datasets. Given the parents of a variable $X_j$, one could leverage prior work [44, 25, 9, 26] on detecting heterogeneous functional relationships. However, we highlight some important limitations. Several methods, such as [25, 9, 26], only work for one-dimensional functionals and assume that the datasets share the exact same design matrix.
Although [44] relaxes this assumption and extends the method to multivariate cases, the authors assume that the covariates (i.e., $\mathrm{PA}_j^h$) are sampled from the same distribution across the environments, which is a strong assumption in our context since ancestors of $X_j$ could have experienced mechanism shifts. Finally, methods such as [53] and [11], although nonparametric, need knowledge of the parent set $\mathrm{PA}_j$ for each variable, and they assume that $\mathrm{PA}_j$ is the same across different environments.

Causal discovery from heterogeneous data. Another well-studied problem is to learn the underlying DAG of the SCM $\mathcal{M}^*$ that originated the different environments. Under this setting, [83] provided a characterization of the $\mathcal{I}$-MEC, a subset of the Markov equivalence class. [55] provided DAG-identifiability results by leveraging sparse mechanism shifts, and relies on identifying such shifts, which is precisely what this work aims to solve. [10] developed an estimator considering unknown intervention targets. [79] primarily focuses on linear SEMs and does not adapt well to nonlinear scenarios. Also assuming linear models, [22, 21] applied ideas from linear invariant causal prediction [ICP, 57] and ICM to identify the causal DAG. [72] proposes a nonparametric method that can identify the intervention targets; however, this method relies on nonparametric CI tests, which can be time-consuming and sample-inefficient. [49] introduced the joint causal inference (JCI) framework, which can also estimate intervention nodes. However, this method relies on the assumption that the intervention variables are fully connected, a condition that is unlikely to hold in practice. [31] introduced a two-stage approach that removes functional restrictions: first, the PC algorithm is applied to all available data to identify the MEC; then, the second step aims to orient the remaining edges based on a novel measure of mechanism dependence.
Finally, we note that a common assumption in the aforementioned methods is the knowledge of which dataset corresponds to the observational distribution; without such information, their assumptions on the type of interventions would not hold true. In contrast, our method does not require knowledge of the observational distribution.

# 3 Identifying Causal Mechanism Shifts via Score Matching

In this section, we propose iSCAN (identifying Shifts in Causal Additive Noise models), a method for detecting shifted nodes (Definition 3) based only on information from the Jacobian of the score of the data distribution.

Let $\mathbf{X}$ be the row concatenation of all the datasets $\mathbf{X}^h$, i.e., $\mathbf{X} = [(\mathbf{X}^1)^\top | \dots | (\mathbf{X}^H)^\top]^\top \in \mathbb{R}^{m \times d}$, where $m = \sum_{h=1}^{H} m_h$. The pooled data $\mathbf{X}$ can be interpreted as a mixture of data from the $H$ different environments. To account for this mixture, we introduce the probability mass $w_h$, which represents the probability that an observation belongs to environment $h$, i.e., $\sum_{h=1}^{H} w_h = 1$. Let $\mathbb{Q}(X)$ denote the distribution of the mixture data with density function $q(x)$, i.e., $q(x) = \sum_{h=1}^{H} w_h p^h(x)$.

In the sequel, we use $s^h(x) \equiv \nabla \log p^h(x)$ to denote the score function of the joint distribution of environment $h$ with density $p^h(x)$. Also, we let $s(x) \equiv \nabla \log q(x)$ denote the score function of the mixture distribution with density $q(x)$. We make the following assumptions on $f_j^h$ and $N_j^h$.

Assumption A. For all $h \in [H], j \in [d]$, the functional mechanisms $f_j^h(\mathrm{PA}_j^h)$ are assumed to be nonlinear in every component.

Assumption B.
For all $j \in [d]$, $h \in [H]$, the pdf of the real-valued noise $N_j^h$, denoted by $p_{N_j}^h$, satisfies $\frac{\partial^2}{(\partial n_j^h)^2} \log p_{N_j}^h(n_j^h) = c_j^h$, where $c_j^h$ is a non-zero constant. Moreover, $\mathbb{E}[N_j^h] = 0$.

For an ANM, Rolland et al. [65] showed that under Assumption A and assuming zero-mean Gaussian noises (which satisfy Assumption B), the diagonal of the Jacobian of the score function reveals the leaves of the underlying DAG. We next instantiate their result in our context.

Proposition 1 (Lemma 1 in [65, 68]). For an environment $\mathcal{E}_h$ with underlying DAG $G^h$ and pdf $p^h(x)$, let $s^h(x) = \nabla \log p^h(x)$ be the associated score function. Then, under Assumptions A and B, for all $j \in [d]$, we have:

$$
\text{Node } j \text{ is a leaf in } G^h \iff \operatorname{Var}_X\left[\frac{\partial s_j^h(X)}{\partial x_j}\right] = 0.
$$

Motivated by the ideas of leaf identifiability from the score's Jacobian in a single ANM, we next show that the score's Jacobian of the mixture distribution can help reveal mechanism shifts among the different environments.

Theorem 1. For all $h \in [H]$, let $G^h$ and $p^h(x)$ denote the underlying DAG structure and pdf of environment $\mathcal{E}_h$, respectively, and let $q(x)$ be the pdf of the mixture distribution of the $H$ environments, such that $q(x) = \sum_{h=1}^{H} w_h p^h(x)$. Also, let $s(x) = \nabla \log q(x)$ be the associated score function. Then, under Assumptions A and B, we have:

(i) If $j$ is a leaf in all DAGs $G^h$, then $j$ is a shifted node if and only if $\operatorname{Var}_X\left[\frac{\partial s_j(X)}{\partial x_j}\right] > 0$.
(ii) If $j$ is not a leaf in at least one DAG $G^h$, then $\operatorname{Var}_X\left[\frac{\partial s_j(X)}{\partial x_j}\right] > 0$.

Theorem 1 along with Proposition 1 suggests a way to identify shifted nodes.
Namely, we use Proposition 1 to identify a common leaf, and then use Theorem 1 to test whether such a leaf is a shifted node. We then proceed to remove the leaf and repeat the process. See Algorithm 1. Note that since each environment is the result of an intervention (Definition 2) on an underlying ANM $\mathcal{M}^*$, the leaves in $G^*$ will remain leaves in each DAG $G^h$.

Algorithm 1 iSCAN—Identifying Shifts in Causal Additive Noise models.
Input: Datasets $X^1, \ldots, X^H$.
Output: Shifted variables set $\widehat{S}$, and topological sort $\hat{\pi}$.
1: Initialize $\widehat{S} = \{\}$, $\hat{\pi} = (\cdot)$, $\mathcal{N} = \{1, \ldots, d\}$.
2: Set $\mathbf{X} = [(\mathbf{X}^1)^\top | \cdots | (\mathbf{X}^H)^\top]^\top \in \mathbb{R}^{m \times d}$.
3: while $\mathcal{N} \neq \emptyset$ do
4: $\forall h \in [H]$, $\operatorname{Var}^h \gets \operatorname{Var}_{\mathbf{X}^h}\left[\operatorname{diag}(\nabla^2 \log p^h(x))\right]$.
5: $\operatorname{Var} \gets \operatorname{Var}_{\mathbf{X}}\left[\operatorname{diag}(\nabla^2 \log q(x))\right]$.
6: $L \gets \bigcap_{h \in [H]} \left\{j \mid \operatorname{Var}_j^h = 0, j \in [d]\right\}$.
7: $\widehat{S} \gets \widehat{S} \cup \{j \mid \operatorname{Var}_j \neq 0, j \in L\}$.
8: $\mathcal{N} \gets \mathcal{N} \setminus L$.
9: $\forall l \in L$, remove the $l$-th column of $\mathbf{X}^h$, $\forall h \in [H]$, and of $\mathbf{X}$.
10: $\hat{\pi} \gets (L, \hat{\pi})$.

Remark 2. See Appendix A for a practical implementation of Alg. 1. Finally, note that Alg. 1 also estimates a valid topological sort for the different environments by leveraging Proposition 1.
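The control flow of Algorithm 1 can be sketched separately from score estimation by abstracting lines 4–5 into callables that return the per-variable variances. In the illustrative sketch below, an oracle built from known ground-truth DAGs stands in for the score-based estimates of Section 3.1; all names and the oracle itself are hypothetical, not the paper's implementation:

```python
def iscan_skeleton(nodes, env_vars, mix_var, tol=1e-8):
    """Control flow of Algorithm 1 with the variance estimates abstracted.

    env_vars[h](j, remaining) plays the role of Var^h_j (line 4), and
    mix_var(j, remaining) plays the role of Var_j (line 5).
    """
    remaining, shifted, order = set(nodes), set(), []
    while remaining:
        # Line 6: common leaves have zero variance of the score-Jacobian
        # diagonal in every environment (Proposition 1).
        leaves = {j for j in remaining
                  if all(ev(j, remaining) <= tol for ev in env_vars)}
        # Line 7: a common leaf is shifted iff its mixture variance is
        # positive (Theorem 1(i)).
        shifted |= {j for j in leaves if mix_var(j, remaining) > tol}
        remaining -= leaves                  # line 8
        order = [sorted(leaves)] + order     # line 10: leaves come last
    return shifted, order

# Oracle variances built from known ground truth (illustration only):
# environment 1 is the chain 1 -> 2 -> 3; environment 2 removes 1 -> 2.
dags = [{1: set(), 2: {1}, 3: {2}},
        {1: set(), 2: set(), 3: {2}}]
true_shifted = {2}   # the mechanism of node 2 changed

def env_var(dag):
    # Var^h_j is zero iff j has no child among the remaining nodes in G^h.
    return lambda j, remaining: float(any(j in dag[k] for k in remaining))

def mix_var(j, remaining):
    # Theorem 1: positive iff j is shifted or a non-leaf in some environment.
    non_leaf = any(env_var(d)(j, remaining) > 0 for d in dags)
    return float(non_leaf or j in true_shifted)

shifted, order = iscan_skeleton({1, 2, 3}, [env_var(d) for d in dags], mix_var)
```

On this three-node example the sketch returns the shifted set `{2}` and the topological sort `[[1], [2], [3]]`, peeling off common leaves from last to first.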

# 3.1 Score's Jacobian estimation

Since the procedure to estimate $\mathrm{Var}_q[\frac{\partial s_j(x)}{\partial x_j}]$ is similar to that for estimating $\mathrm{Var}_{p^h}[\frac{\partial s_j^h(x)}{\partial x_j}]$ for each $h \in [H]$, in this section we discuss the estimation of $\mathrm{Var}_q[\frac{\partial s_j(x)}{\partial x_j}]$, which involves computing the diagonal of the Hessian of $\log q(x)$. To estimate this quantity, we adopt an approach similar to the method in [45, 65]. First, we estimate the first-order derivative of $\log q(x)$ by Stein's identity [73]:

$$
\mathbb{E}_q\left[\boldsymbol{h}(x) \nabla \log q(x)^\top + \nabla \boldsymbol{h}(x)\right] = 0, \tag{4}
$$

where $\boldsymbol{h} : \mathbb{R}^d \to \mathbb{R}^{d'}$ is any test function such that $\lim_{x \to \infty} \boldsymbol{h}(x) q(x) = 0$. Once we have estimated $\nabla \log q(x)$, we can proceed to estimate the Hessian's diagonal by using the second-order Stein's identity:

$$
\mathbb{E}_q\left[\boldsymbol{h}(x) \operatorname{diag}(\nabla^2 \log q(x))^\top\right] = \mathbb{E}_q\left[\nabla_{\operatorname{diag}}^2 \boldsymbol{h}(x) - \boldsymbol{h}(x) \operatorname{diag}(\nabla \log q(x) \nabla \log q(x)^\top)\right]. \tag{5}
$$

Using eq. (4) and eq. (5), we can estimate the Hessian's diagonal at each data point, thus allowing us to obtain an estimate of $\mathrm{Var}_q\left[\frac{\partial s_j(x)}{\partial x_j}\right]$. See Appendix A.1 for additional details.

Remark 3 (Consistency of Algorithm 1). The estimators in eq. (6) and eq. (7), given in Appendix A.1, correspond to Monte Carlo estimators based on eq. (4) and eq. (5), respectively; hence, the error of the estimators tends to zero as the number of samples goes to infinity. See, for instance, the discussion in Section 3.1 of [45]. We empirically explore the consistency of Algorithm 1 in Figure 2.

Remark 4 (Computational Complexity).
We adopt the kernel-based estimator, SCORE, from [65], so the computational complexity for estimating the score's Jacobian in a single environment is $\mathcal{O}(dm_h^3)$. In Algorithm 1, computation is dominated by the SCORE function applied to the pooled data $\mathbf{X} \in \mathbb{R}^{m \times d}$. Therefore, the overall complexity of Algorithm 1 is $\mathcal{O}(dm^3)$. See Figure 2.

Figure 2: (Left) F1 score of the output of Alg. 1 w.r.t. the true set of shifted nodes. For different numbers of nodes, we observe how iSCAN recovers the true set of shifted nodes as the number of samples increases, thus empirically showing its consistency. (Right) Runtime vs. number of nodes for different numbers of samples. We corroborate the linear dependence of the time complexity on $d$.
![](images/417c523244afc2f8442a8b9f83d30fdb1b1cbf8397bde46e500d82b20670358a.jpg)

![](images/6f186887c98b637c2145f82a878aeb7c5b575dd74bac6475fc37c24df5556df3.jpg)

# 4 On Identifying Structural Differences

After estimating the set of shifted nodes $\widehat{S}$ through Algorithm 1, it is of high interest to predict which causal relations between a shifted node and its parents have undergone changes across the environments. The meaning of a change in a causal relationship can vary based on the context and the estimation objective. This section primarily centers on structural changes, elaborated further below, while additional discussion of other types of changes is available in Appendix D.

Definition 4 (Structurally shifted edge). For a given shifted node $X_j$, an edge $X_i \to X_j$ is called a structurally shifted edge if $\exists h, h' \in [H]$ such that $X_i \in \mathrm{PA}_j^h$ and $X_i \notin \mathrm{PA}_j^{h'}$.

In other words, a structurally shifted edge is an edge that exists in one environment but not in another, indicating a change in the underlying structure of the causal mechanism.
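Once parent sets have been estimated for each environment, checking Definition 4 reduces to a set comparison across environments. A minimal sketch of this comparison step, with illustrative hand-specified parent sets rather than estimated ones:

```python
def structurally_shifted_edges(parents_per_env, shifted_nodes):
    """X_i -> X_j is structurally shifted if X_i is a parent of X_j in
    some environment but not in another (Definition 4)."""
    edges = set()
    for j in shifted_nodes:
        # Parent sets of X_j across all H environments.
        pa_sets = [env[j] for env in parents_per_env]
        union = set.union(*pa_sets)
        inter = set.intersection(*pa_sets)
        # Parents present in some but not all environments.
        edges |= {(i, j) for i in union - inter}
    return edges

# Illustrative (hand-specified) parent sets for two environments:
# the edge 4 -> 5 is present in the first environment only, while
# node 3 keeps the same parent set in both.
pa_env1 = {5: {3, 4}, 3: {1}}
pa_env2 = {5: {3}, 3: {1}}
print(structurally_shifted_edges([pa_env1, pa_env2], {3, 5}))  # {(4, 5)}
```

In the actual procedure, the per-environment parent sets would come from a nonparametric variable selection step applied to each shifted node, as described next.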
To detect the structurally shifted edges, we estimate the parents of each shifted node in $\widehat{S}$ for all environments $\mathcal{E}_h$.

Remark 5. Note that under the sparse mechanism shift hypothesis [69], i.e., $|S| \ll d$, estimating the parents of each shifted node is much more efficient than estimating the entire individual structures.

Kernel regression and variable selection. A potential strategy to estimate structurally shifted edges involves employing the estimated topological order $\hat{\pi}$ obtained from Algorithm 1. If this estimated topological order remains valid across all environments, it can serve as a guide for a nonparametric variable selection procedure to identify the parents of a shifted node $X_j$. Specifically, we can regress the shifted node $X_j$ on its predecessors $\widehat{\mathrm{Pre}}(X_j)$ and proceed with nonparametric variable selection. Here $\widehat{\mathrm{Pre}}(X_j)$ consists of the set of nodes that appear before $X_j$ in the estimated topological order $\hat{\pi}$. To achieve this, there exist various methods under the hypothesis-testing framework [42, 17, 63], as well as bandwidth selection procedures [40]. These methods offer consistency guarantees, but their time complexity can be problematic. Kernel regression, for example, has a time complexity of $\mathcal{O}(m^3)$ and requires an additional bandwidth selection procedure, usually with a time complexity of $\mathcal{O}(m^2)$. Consequently, it becomes imperative to find a more efficient method for identifying parents locally.

Feature ordering by conditional independence (FOCI). An alternative, efficient approach for identifying the parents is to leverage the feature-ordering method based on conditional independence proposed by Azadkia and Chatterjee [3]. This method provides a measure of conditional dependency between variables with a time complexity of $\mathcal{O}(m \log m)$.
By applying this method, we can perform fast variable selection in a nonparametric setting. See Algorithm 4 in Appendix A.3.

Theorem 2 (Consistency of Algorithm 4). Under Assumption C, given in Appendix B.2, if the estimated topological order $\hat{\pi}$ output by Algorithm 1 is valid for all environments, then the output $\widehat{\mathrm{PA}}_j^h$ of Algorithm 4 equals the true parent set $\mathrm{PA}_j^h$ of node $X_j$ with high probability, for all $h \in [H]$.

Motivated by Theorem 2, we next present Algorithm 2, a procedure to estimate the structurally shifted edges. Given the consistency of Alg. 1 and Alg. 2, it follows that combining both algorithms will correctly estimate the true set of shifted nodes and structurally shifted edges, asymptotically.

Algorithm 2 Identifying structurally shifted edges.
Input: Data $\{X^h\}_{h \in [H]}$, topological order $\hat{\pi}$, shifted nodes $\widehat{S}$.
Output: Structurally shifted edges set $\widehat{E}$.
1: Initialize $\widehat{E} = \emptyset$.
2: for $X_j$ in $\widehat{S}$ do
3: for $h$ in $[H]$ do
4: Estimate $\widehat{\mathrm{PA}}_j^h$ from Alg. 4 (FOCI) with input $\{\widehat{\mathrm{Pre}}(\mathbf{X}_j^h), \mathbf{X}_j^h\}$.
5: if $\exists X_k, h, h'$ such that $X_k \in \widehat{\mathrm{PA}}_j^h, X_k \notin \widehat{\mathrm{PA}}_j^{h'}$ then
6: $\widehat{E} \gets \widehat{E} \cup \{(X_k, X_j)\}$.

# 5 Experiments

We conducted a comprehensive evaluation of our algorithms. Section 5.1 focuses on assessing the performance of iSCAN (Alg. 1) for identifying shifted variables. In Section 5.2, we apply iSCAN for identifying shifted nodes, along with FOCI (Alg. 2) for estimating structural changes, on apoptosis data. Also, in App. C, we provide additional experiments including: (i) Localizing shifted nodes without structural changes (App. C.1), and where the functionals are sampled from Gaussian processes (App.
C.1.1); (ii) Localizing shifted nodes and estimating structural changes when the underlying graphs are different; and (iii) Evaluating iSCAN using the elbow method for selecting shifted nodes (see App. C.3 and Remark 6). Code is publicly available at https://github.com/kevinsbello/iSCAN.

# 5.1 Synthetic experiments on shifted nodes

Graph models. We generated random graphs using the Erdős-Rényi (ER) and scale-free (SF) models. For a given number of variables $d$, ER $k$ and SF $k$ indicate an average number of edges equal to $kd$.

Data generation. We first sampled a DAG, $G^1$, of $d$ nodes according to either the ER or SF model for env. $\mathcal{E}_1$. For env. $\mathcal{E}_2$, we initialized its DAG structure from env. $\mathcal{E}_1$ and produced structural changes by randomly selecting $0.2 \cdot d$ nodes from the non-root nodes. This set of selected nodes $S$, with cardinality $|S| = 0.2d$, corresponds to the set of "shifted nodes". In env. $\mathcal{E}_2$, for each shifted node $X_j \in S$, we uniformly at random deleted at most three of its incoming edges, and we use $D_j$ to denote the parents whose edges to $X_j$ were deleted; thus, the DAG $G^2$ is a subgraph of $G^1$. Then, in $\mathcal{E}_1$, each $X_j$ was defined as follows:

$$
X_j = \sum_{i \in \mathrm{PA}_j^1 \setminus D_j} \sin(X_i^2) + \sum_{i \in D_j} 4 \cos(2X_i^2 - 3X_i) + N_j
$$

In $\mathcal{E}_2$, each $X_j$ was defined as follows:

$$
X_j = \sum_{i \in \mathrm{PA}_j^2} \sin(X_i^2) + N_j
$$

Experiment details. For each simulation, we generated 500 data points per environment, i.e., $m_1 = 500$, $m_2 = 500$, and $m = 1000$. The noise variances were set to 1. We conducted 30 simulations for each combination of graph type (ER or SF), noise type (Gaussian, Gumbel, and Laplace), and number of nodes ($d \in \{10, 20, 30, 50\}$).
The running time was recorded by executing the experiments on an Intel Xeon Gold 6248R Processor with 8 cores. For our method, we used $\eta = 0.05$ for eq.(6) and eq.(7), and a threshold $t = 2$ (see Alg. 3).

Evaluation. We compared the performance of iSCAN against several baselines: DCI [82], the approach by Budhathoki et al. [11], CITE [79], KCD [53], SCORE [65], and UT-IGSP [72]. Figure 3 illustrates the results for ER4 and SF4 graphs. We note that iSCAN consistently outperforms the other baselines in terms of F1 score across all scenarios. Importantly, note how the performance of some baselines, such as DCI, CITE, Budhathoki et al.'s, and SCORE, degrades faster for graphs with hub nodes, a property of SF graphs. In contrast, iSCAN's performance remains stable, as it does not rely on structural assumptions about the individual DAGs. Additionally, our method runs faster than KCD, Budhathoki et al.'s, and SCORE, particularly for larger numbers of nodes.

In Appendix C.1, we provide experiments on sparser graphs such as ER2/SF2, and denser graphs such as ER6/SF6. We also include Precision and Recall in all plots in the supplement.

![](images/2cc1bb0526198fe85989ec13bc8ee58b2c1485d9e55a64d16e317ed2483494ea.jpg)
Figure 3: Experiments on ER4 and SF4 graphs. See the experiment details above. The points indicate average values over 30 simulations; the error bars depict standard errors. Our method iSCAN (light blue) consistently outperformed baseline methods in terms of F1 score.

# 5.2 Experiments on apoptosis data

We conducted an analysis on an ovarian cancer dataset using iSCAN (Algorithm 1) to identify shifted nodes and Algorithm 2 to detect structurally shifted edges (SSEs). This dataset had previously been analyzed using the DPM method [88] in the undirected setting, and the DCI method [82] in the linear setting.
By applying our method, we were able to identify the shifted nodes and SSEs in the dataset (see Figure 4a). Our analysis identified two hub nodes in the apoptosis pathway: BIRC3 and PRKAR2B. The identification of BIRC3 as a hub node is consistent with the results obtained by the DPM and DCI methods. Additionally, the identification of PRKAR2B as a hub node is consistent with the result obtained by the DCI method. Indeed, BIRC3, in addition to its role in inhibiting TRAIL-induced apoptosis, has been investigated as a potential therapeutic target in cancer treatment, including ovarian cancer [35, 81]; whereas PRKAR2B has been identified as an important factor in the progression of ovarian cancer cells, serving as a key regulatory unit involved in the growth and development of cancer cells [84, 13].

Figure 4: Results on apoptosis data.
![](images/702350f7ce675c648dc85fac1cc63a080899c48597e166ddb5cb5741e256891e.jpg)
(a) The red nodes are the shifted nodes estimated by iSCAN (Alg. 1). The edges are the structurally shifted edges estimated by FOCI (Alg. 2).

![](images/04341535408736d06a9ffa96af0e53331827e9d4133a958741b2f9f7b23e63bf.jpg)
(b) Undirected difference network estimated by DPM [88]. The red nodes indicate hub nodes; however, it is not clear which node mechanisms have changed.

# 6 Conclusion

In this work, we showed a novel connection between score matching and identifying causal mechanism shifts among related heterogeneous datasets. This finding opens up a new and promising application for score function estimation techniques.

Our proposed technique consists of three modules. The first module estimates the Jacobian of the score under the individual distributions and the mixture distribution. The second module identifies shifted variables using the estimated Jacobians, allowing us to pinpoint the nodes that have undergone a mechanism shift.
Finally, the third module estimates the structurally shifted edges, i.e., the difference DAG, by leveraging the information from the identified shifted nodes and the estimated topological order. It is important to note that our identifiability result in Theorem 1 is agnostic to the choice of the score estimator.

The strength of our result lies in its capability to recover the difference DAG in nonlinear additive noise models (ANMs) without making any assumptions about the parametric form of the functions or statistical independencies. This makes our method applicable in a wide range of scenarios where nonlinear relationships and shifts in mechanisms are present.

# 6.1 Limitations and future work

While our work demonstrates the applicability of score matching in identifying causal mechanism shifts in the context of nonlinear ANMs, there are several limitations and areas for future exploration:

Extension to other families of SCMs: Currently, our method is primarily focused on ANMs where the noise distribution satisfies Assumption B, e.g., Gaussian distributions. It would be valuable to investigate the application of score matching to identifying causal mechanism shifts in other types of SCMs. Recent literature, such as [48], has extended score matching to additive models with arbitrary noise for recovering the topological order. Expanding our method to accommodate different noise models would enhance its applicability to a wider range of real-world scenarios.

Convergence rate analysis: Although the score matching estimator is asymptotically consistent, its convergence rate remains unknown in general. Understanding the convergence properties of the estimator is crucial for determining the sample efficiency and estimating the number of samples required to control the estimation error within a desired threshold.
Further theoretical developments, such as [37], on score matching estimators would provide valuable insights into the performance and sample requirements of iSCAN. + +# Acknowledgments and Disclosure of Funding + +K.B. was supported by NSF under Grant # 2127309 to the Computing Research Association for the CIFellows 2021 Project. B.A. was supported by NSF IIS-1956330, NIH R01GM140467, and the Robert H. Topel Faculty Research Fund at the University of Chicago Booth School of Business. P.R. was supported by ONR via N000141812861, and NSF via IIS-1909816, IIS-1955532, IIS-2211907. We are also grateful for the support of the University of Chicago Research Computing Center for assistance with the calculations carried out in this work. + +# References + +[1] Aragam, B., Amini, A. and Zhou, Q. [2019], 'Globally optimal score-based learning of directed acyclic graphs in high-dimensions', Advances in Neural Information Processing Systems 32. +[2] Aragam, B. and Zhou, Q. [2015], 'Concave penalized estimation of sparse Gaussian Bayesian networks', The Journal of Machine Learning Research 16(1), 2273-2328. +[3] Azadkia, M. and Chatterjee, S. [2021], 'A simple measure of conditional dependence', The Annals of Statistics 49(6), 3070-3102. +[4] Azadkia, M., Taeb, A. and Buhlmann, P. [2021], 'A fast non-parametric approach for local causal structure learning', arXiv e-prints pp. arXiv-2111. +[5] Barabási, A.-L. and Albert, R. [1999], 'Emergence of scaling in random networks', science 286(5439), 509-512. +[6] Barabási, A.-L., Gulbahce, N. and Loscalzo, J. [2011], 'Network medicine: a network-based approach to human disease', Nature reviews genetics 12(1), 56-68. +[7] Barabasi, A.-L. and Oltvai, Z. N. [2004], 'Network biology: understanding the cell's functional organization', Nature reviews genetics 5(2), 101-113. +[8] Bello, K., Aragam, B. and Ravikumar, P. 
[2022], 'Dagma: Learning dags via m-matrices and a log-determinant acyclicity characterization', arXiv preprint arXiv:2209.08037. +[9] Boente, G. and Pardo-Fernández, J. C. [2022], 'Robust testing to compare regression curves', arXiv preprint arXiv:2205.12065. +[10] Brouillard, P., Lachapelle, S., Lacoste, A., Lacoste-Julien, S. and Drouin, A. [2020], 'Differentiable causal discovery from interventional data', Advances in Neural Information Processing Systems 33, 21865-21877. +[11] Budhathoki, K., Janzing, D., Bloebaum, P. and Ng, H. [2021], Why did the distribution change?, in 'International Conference on Artificial Intelligence and Statistics', PMLR, pp. 1666-1674. +[12] Buhlmann, P., Peters, J. and Ernest, J. [2014], 'Cam: Causal additive models, high-dimensional order search and penalized regression', The Annals of Statistics 42(6), 2526-2556. +[13] Chiaradonna, F., Balestrieri, C., Gaglio, D. and Vanoni, M. [2008], 'Ras and pka pathways in cancer: new insight from transcriptional analysis', Frontiers in Bioscience-Landmark 13(14), 5257-5278. +[14] Chickering, D. M. [1996], Learning bayesian networks is np-complete, in 'Learning from data', Springer, pp. 121-130. +[15] Chickering, D. M. [2003], 'Optimal structure identification with greedy search', JMLR 3, 507-554. +[16] Chickering, D. M., Heckerman, D. and Meek, C. [2004], 'Large-sample learning of Bayesian networks is NP-hard', Journal of Machine Learning Research 5, 1287-1330. +[17] Delgado, M. A. and Manteiga, W. G. [2001], 'Significance testing in nonparametric regression based on the bootstrap', The Annals of Statistics 29(5), 1469-1507. +[18] Demiralp, S. and Hoover, K. D. [2003], 'Searching for the causal structure of a vector autoregression', Oxford Bulletin of Economics and statistics 65, 745-767. +[19] Fazayeli, F. and Banerjee, A. [2016], Generalized Direct Change Estimation in Ising Model Structure, in 'International Conference on Machine Learning'. + +[20] Friedman, N., Linial, M., Nachman, I. 
and Pe'er, D. [2000], Using bayesian networks to analyze expression data, in Proceedings of the fourth annual international conference on Computational molecular biology', pp. 127-135. +[21] Ghassami, A., Kiyavash, N., Huang, B. and Zhang, K. [2018], 'Multi-domain causal structure learning in linear systems', Advances in neural information processing systems 31. +[22] Ghassami, A., Salehkaleybar, S., Kiyavash, N. and Zhang, K. [2017], 'Learning causal structures using regression invariance', Advances in Neural Information Processing Systems 30. +[23] Ghoshal, A., Bello, K. and Honorio, J. [2019], 'Direct learning with guarantees of the difference dag between structural equation models', arXiv preprint arXiv:1906.12024. +[24] Ghoshal, A. and Honorio, J. [2018], Learning linear structural equation models in polynomial time and sample complexity, in 'Proceedings of the Twenty-First International Conference on Artificial Intelligence and Statistics', Vol. 84 of Proceedings of Machine Learning Research, PMLR, pp. 1466–1475. +[25] Hall, P. and Hart, J. D. [1990], 'Bootstrap test for difference between means in nonparametric regression', Journal of the American Statistical Association 85(412), 1039-1049. +[26] Hardle, W. and Marron, J. S. [1990], 'Semiparametric comparison of regression curves', The Annals of Statistics pp. 63-89. +[27] Hastie, T. J. [2017], Generalized additive models, in 'Statistical models in S', Routledge, pp. 249-307. +[28] Hoover, K. D., Demiralp, S. and Perez, S. J. [2009], 'Empirical identification of the vector autoregression: The causes and effects of us m2', The methodology and practice of econometrics: a Festschrift in honour of David F. Hendry pp. 37-58. +[29] Hoyer, P., Janzing, D., Mooij, J. M., Peters, J. and Scholkopf, B. [2008], 'Nonlinear causal discovery with additive noise models', Advances in neural information processing systems 21. +[30] Hu, P., Jiao, R., Jin, L. and Xiong, M. 
[2018], 'Application of causal inference to genomic analysis: advances in methodology', Frontiers in Genetics 9, 238. +[31] Huang, B., Zhang, K., Zhang, J., Ramsey, J., Sanchez-Romero, R., Glymour, C. and Schölkopf, B. [2020], 'Causal discovery from heterogeneous/nonstationary data', The Journal of Machine Learning Research 21(1), 3482-3534. +[32] Hudson, N. J., Reverter, A. and Dalrymple, B. P. [2009], 'A differential wiring analysis of expression data correctly identifies the gene containing the causal mutation', PLoS computational biology 5(5), e1000382. +[33] Ikram, A., Chakraborty, S., Mitra, S., Saini, S., Bagchi, S. and Kocaoglu, M. [2022], 'Root cause analysis of failures in microservices through causal discovery', Advances in Neural Information Processing Systems 35, 31158-31170. +[34] Imbens, G. W. [2020], 'Potential outcome and directed acyclic graph approaches to causality: Relevance for empirical practice in economics', Journal of Economic Literature 58(4), 1129-1179. +[35] Johnstone, R. W., Frew, A. J. and Smyth, M. J. [2008], 'The trail apoptotic pathway in cancer onset, progression and therapy', Nature Reviews Cancer 8(10), 782-798. +[36] Kalisch, M. and Buhlman, P. [2007], 'Estimating high-dimensional directed acyclic graphs with the pc-algorithm', Journal of Machine Learning Research 8(3). +[37] Koehler, F., Heckett, A. and Risteski, A. [2022], 'Statistical efficiency of score matching: The view from isoperimetry', arXiv preprint arXiv:2210.00726. +[38] Koller, D. and Friedman, N. [2009], Probabilistic graphical models: principles and techniques, MIT press. +[39] Lachapelle, S., Brouillard, P., Deleu, T. and Lacoste-Julien, S. [2019], 'Gradient-based neural dag learning', arXiv preprint arXiv:1906.02226. +[40] Lafferty, J. and Wasserman, L. [2008], 'Rodeo: Sparse, greedy nonparametric regression'. +[41] Larranaga, P., Kuijpers, C. M., Murga, R. H. and Yurramendi, Y. 
[1996], 'Learning bayesian network structures by searching for the best ordering with genetic algorithms', IEEE transactions on systems, man, and cybernetics-part A: systems and humans 26(4), 487-493.

[42] Lavergne, P. and Vuong, Q. [2000], 'Nonparametric significance testing', Econometric Theory 16(4), 576-601.
[43] Li, C., Shen, X. and Pan, W. [2023], 'Nonlinear causal discovery with confounders', Journal of the American Statistical Association pp. 1-10.
[44] Li, X., Jiang, B. and Liu, J. S. [2021], 'Kernel-based partial permutation test for detecting heterogeneous functional relationship', Journal of the American Statistical Association pp. 1-19.
[45] Li, Y. and Turner, R. E. [2017], 'Gradient estimators for implicit models', arXiv preprint arXiv:1705.07107.
[46] Liu, S., Suzuki, T., Relator, R., Sese, J., Sugiyama, M., Fukumizu, K. et al. [2017], 'Support consistency of direct sparse-change learning in markov networks', The Annals of Statistics.
[47] Loh, P.-L. and Buhlmann, P. [2014], 'High-Dimensional Learning of Linear Causal Networks via Inverse Covariance Estimation', Journal of Machine Learning Research.
[48] Montagna, F., Noceti, N., Rosasco, L., Zhang, K. and Locatello, F. [2023], 'Causal discovery with score matching on additive models with arbitrary noise', arXiv:2304.03265.
[49] Mooij, J. M., Magliacane, S. and Claassen, T. [2020], 'Joint causal inference from multiple contexts', The Journal of Machine Learning Research 21(1), 3919-4026.
[50] Nandy, P., Hauser, A. and Maathuis, M. H. [2018], 'High-dimensional consistency in score-based and hybrid structure learning', The Annals of Statistics 46(6A), 3151-3183.
[51] Ng, I., Ghassami, A. and Zhang, K. [2020], 'On the role of sparsity and dag constraints for learning linear dags', Advances in Neural Information Processing Systems 33, 17943-17954.
[52] Paleyes, A., Guo, S., Scholkopf, B. and Lawrence, N. D.
[2023], Dataflow graphs as complete causal graphs, in '2023 IEEE/ACM 2nd International Conference on AI Engineering-Software Engineering for AI (CAIN)', IEEE, pp. 7-12. +[53] Park, J., Shalit, U., Scholkopf, B. and Muandet, K. [2021], Conditional distributional treatment effect with kernel conditional mean embeddings and u-statistic regression, in 'International Conference on Machine Learning', PMLR, pp. 8401-8412. +[54] Pearl, J. [2009], CAUSALITY: Models, Reasoning, and Inference, 2nd edn, Cambridge University Press. +[55] Perry, R., Von Kugelgen, J. and Scholkopf, B. [2022], 'Causal discovery in heterogeneous environments under the sparse mechanism shift hypothesis', arXiv preprint arXiv:2206.02013. +[56] Peters, J. and Buhlmann, P. [2014], 'Identifiability of gaussian structural equation models with equal error variances', Biometrika 101(1), 219-228. +[57] Peters, J., Buhlmann, P. and Meinshausen, N. [2016], 'Causal inference by using invariant prediction: identification and confidence intervals', Journal of the Royal Statistical Society. Series B (Statistical Methodology) pp. 947-1012. +[58] Peters, J., Janzing, D. and Schölkopf, B. [2017], Elements of causal inference: foundations and learning algorithms, The MIT Press. +[59] Peters, J., Mooij, J. M., Janzing, D. and Scholkopf, B. [2014], 'Causal discovery with continuous additive noise models'. +[60] Pimanda, J. E., Ottersbach, K., Knezevic, K., Kinston, S., Chan, W. Y., Wilson, N. K., Landry, J.-R., Wood, A. D., Kolb-Kokocinski, A., Green, A. R. et al. [2007], 'Gata2, fli1, and scl form a recursively wired gene-regulatory circuit during early hematopoietic development', Proceedings of the National Academy of Sciences 104(45), 17692-17697. +[61] Plis, S. M., Calhoun, V. D., Weisend, M. P., Eichele, T. and Lane, T. [2010], 'Meg and fmri fusion for non-linear estimation of neural and bold signal changes', Frontiers in neuroinformatics 4, 114. +[62] Plis, S. M., Weisend, M. 
P., Damaraju, E., Eichele, T., Mayer, A., Clark, V. P., Lane, T. and Calhoun, V. D. [2011], 'Effective connectivity analysis of fmri and meg data collected under identical paradigms', Computers in biology and medicine 41(12), 1156-1165. +[63] Racine, J. [1997], 'Consistent significance testing for nonparametric regression', Journal of Business & Economic Statistics 15(3), 369-378. + +[64] Robins, J. M., Hernan, M. A. and Brumback, B. [2000], 'Marginal structural models and causal inference in epidemiology', Epidemiology pp. 550-560. +[65] Rolland, P., Cevher, V., Kleindessner, M., Russell, C., Janzing, D., Schölkopf, B. and Locatello, F. [2022], Score matching enables causal discovery of nonlinear additive noise models, in 'International Conference on Machine Learning', PMLR, pp. 18741-18753. +[66] Sachs, K., Perez, O., Pe'er, D., Lauffenburger, D. A. and Nolan, G. P. [2005], 'Causal protein-signaling networks derived from multiparameter single-cell data', Science 308(5721), 523-529. +[67] Saeed, B., Panigrahi, S. and Uhler, C. [2020], Causal structure discovery from distributions arising from mixtures of dags, in 'International Conference on Machine Learning', PMLR, pp. 8336-8345. +[68] Sanchez, P., Liu, X., O'Neil, A. Q. and Tsaftaris, S. A. [2022], 'Diffusion models for causal discovery via topological ordering', arXiv preprint arXiv:2210.06201. +[69] Scholkopf, B., Locatello, F., Bauer, S., Ke, N. R., Kalchbrenner, N., Goyal, A. and Bengio, Y. [2021], 'Toward causal representation learning', Proceedings of the IEEE 109(5), 612-634. +[70] Shimizu, S., Hoyer, P. O., Hyvarinen, A., Kerminen, A. and Jordan, M. [2006], 'A linear non-gaussian acyclic model for causal discovery', Journal of Machine Learning Research. +[71] Spirtes, P., Glymour, C. N., Scheines, R. and Heckerman, D. [2000], Causation, prediction, and search, MIT press. +[72] Squires, C., Wang, Y. and Uhler, C. 
[2020], Permutation-based causal structure learning with unknown intervention targets, in 'Conference on Uncertainty in Artificial Intelligence', PMLR, pp. 1039-1048. +[73] Stein, C. [1972], A bound for the error in the normal approximation to the distribution of a sum of dependent random variables, in 'Proc. Sixth Berkeley Symp. Math. Stat. Prob.', pp. 583-602. +[74] Tanay, A., Regev, A. and Shamir, R. [2005], 'Conservation and evolvability in regulatory networks: the evolution of ribosomal regulation in yeast', Proceedings of the National Academy of Sciences 102(20), 7203-7208. +[75] Teyssier, M. and Koller, D. [2012], 'Ordering-based search: A simple and effective algorithm for learning bayesian networks', arXiv preprint arXiv:1207.1429. +[76] Tsamardinos, I., Brown, L. E. and Aliferis, C. F. [2006], 'The max-min hill-climbing Bayesian network structure learning algorithm', Machine Learning 65(1), 31-78. +[77] Uhler, C., Raskutti, G., Buhlmann, P. and Yu, B. [2013], 'Geometry of the faithfulness assumption in causal inference', The Annals of Statistics pp. 436-463. +[78] Van de Geer, S. and Buhlmann, P. [2013], ‘ $\ell_0$ -penalized maximum likelihood for sparse directed acyclic graphs’, The Annals of Statistics 41(2), 536–567. +[79] Varici, B., Shanmugam, K., Sattigeri, P. and Tajer, A. [2021], 'Scalable intervention target estimation in linear models', Advances in Neural Information Processing Systems 34, 1494-1505. +[80] Voorman, A., Shojaie, A. and Witten, D. [2014], 'Graph estimation with joint additive models', Biometrika 101(1), 85-101. +[81] Vucic, D. and Fairbrother, W. J. [2007], 'The inhibitor of apoptosis proteins as therapeutic targets in cancer', *Clinical cancer research* 13(20), 5995-6000. +[82] Wang, Y., Squires, C., Belyaeva, A. and Uhler, C. [2018], 'Direct estimation of differences in causal graphs', Advances in neural information processing systems 31. +[83] Yang, K., Katcoff, A. and Uhler, C. 
[2018], Characterizing and learning equivalence classes of causal dags under interventions, in 'International Conference on Machine Learning', PMLR, pp. 5541-5550. +[84] Yoon, H., Jang, H., Kim, E.-Y., Moon, S., Lee, S., Cho, M., Cho, H. J., Ko, J. J., Chang, E. M., Lee, K.-A. et al. [2018], 'Knockdown of prkar2b results in the failure of oocyte maturation', Cellular Physiology and Biochemistry 45(5), 2009–2020. +[85] Yuan, H., Xi, R., Chen, C. and Deng, M. [2017], 'Differential network analysis via lasso penalized D-trace loss', Biometrika. + +[86] Zhang, K., Peters, J., Janzing, D. and Scholkopf, B. [2012], 'Kernel-based conditional independence test and application in causal discovery', arXiv preprint arXiv:1202.3775. +[87] Zhao, B., Wang, Y. S. and Kolar, M. [2020], 'Fudge: Functional differential graph estimation with fully and discretely observed curves', arXiv preprint arXiv:2003.05402. +[88] Zhao, S. D., Cai, T. T. and Li, H. [2014], 'Direct estimation of differential networks', Biometrika 101(2), 253-268. +[89] Zheng, X., Aragam, B., Ravikumar, P. K. and Xing, E. P. [2018], 'Dags with no tears: Continuous optimization for structure learning', NeurIPS. + +# SUPPLEMENTARY MATERIAL + +# iSCAN: Identifying Causal Mechanism Shifts among Nonlinear Additive Noise Models + +# A Practical Implementation + +In this section, we present a more practical version of Alg. 1 that considers estimation errors, see Alg. 3. First, we provide more details of the score's Jacobian estimation. + +# A.1 Practical Version of SCORE + +Let $X = \{x^{1},\ldots ,x^{m}\}$ be a dataset of $m$ possibly non-independent but identically distributed samples. 
From Li and Turner [45], we next present the estimator for the point-wise first-order partial derivatives, corresponding to eq.(4):

$$
\hat{\boldsymbol{G}} = -\left(\boldsymbol{K} + \eta \boldsymbol{I}\right)^{-1} \langle \nabla, \boldsymbol{K} \rangle \tag{6}
$$

where $\boldsymbol{H} = (\boldsymbol{h}(x^{1}),\dots,\boldsymbol{h}(x^{m}))\in \mathbb{R}^{d^{\prime}\times m}$; $\overline{\nabla \boldsymbol{h}} = \frac{1}{m}\sum_{k=1}^{m}\nabla \boldsymbol{h}(x^{k})$; $\boldsymbol{K} = \boldsymbol{H}^{\top}\boldsymbol{H}$, with $K_{ij} = \kappa(x^{i},x^{j}) = \boldsymbol{h}(x^{i})^{\top}\boldsymbol{h}(x^{j})$; $\langle \nabla,\boldsymbol{K}\rangle = m\boldsymbol{H}^{\top}\overline{\nabla \boldsymbol{h}}$, with $\langle \nabla,\boldsymbol{K}\rangle_{ij} = \sum_{k=1}^{m}\nabla_{x_{j}^{k}}\kappa(x^{i},x^{k})$; and $\eta \geq 0$ is a regularization parameter. Here $\hat{\boldsymbol{G}}$ is used to approximate $\boldsymbol{G}\equiv (\nabla \log p(x^{1}),\ldots,\nabla \log p(x^{m}))^{\top}\in \mathbb{R}^{m\times d}$.

From [65], we now present the estimator for the diagonal elements of the score's Jacobian at the sample points, i.e., of $\boldsymbol{J} \equiv (\mathrm{diag}(\nabla^2 \log p(x^1)), \ldots, \mathrm{diag}(\nabla^2 \log p(x^m)))^\top \in \mathbb{R}^{m \times d}$:

$$
\hat{\boldsymbol{J}} = -\operatorname{diag}\left(\hat{\boldsymbol{G}} \hat{\boldsymbol{G}}^{\top}\right) + (\boldsymbol{K} + \eta \boldsymbol{I})^{-1} \langle \nabla_{\mathrm{diag}}^{2}, \boldsymbol{K} \rangle \tag{7}
$$

where $\overline{\nabla_{\mathrm{diag}}^{2}\boldsymbol{h}} = \frac{1}{m}\sum_{k=1}^{m}\nabla_{\mathrm{diag}}^{2}\boldsymbol{h}(x^{k})$, with $(\nabla_{\mathrm{diag}}^{2}\boldsymbol{h}(x))_{ij} = \frac{\partial^{2}h_{i}(x)}{\partial x_{j}^{2}}$; $\langle \nabla_{\mathrm{diag}}^{2},\boldsymbol{K}\rangle = m\boldsymbol{H}^{\top}\overline{\nabla_{\mathrm{diag}}^{2}\boldsymbol{h}}$, with $\langle \nabla_{\mathrm{diag}}^{2},\boldsymbol{K}\rangle_{ij} = \sum_{k=1}^{m}\frac{\partial^{2}\kappa(x^{i},x^{k})}{(\partial x_{j}^{k})^{2}}$; and $\boldsymbol{H}$, $\boldsymbol{K}$, and $\eta \geq 0$ are as in eq.(6).

In the sequel, we use $\mathrm{SCORE}(\mathbf{X})$ to denote the procedure that computes the sample variance of the estimator of the diagonal of the score's Jacobian via eq.(7).

# A.2 Practical Version of Algorithm 1

Let $\widehat{\mathrm{Var}}^h$ be a $d$-dimensional vector, where $d$ is the number of nodes. We introduce a $d$-dimensional vector $\mathrm{rank}^h$, which contains the index (rank) of each element of $\widehat{\mathrm{Var}}^h$ after a non-decreasing sort. For example, if $\widehat{\mathrm{Var}}^h = (5.2, 3.1, 4.5, 1.6)$, then $\mathrm{rank}^h = (3, 1, 2, 0)$. Furthermore, we define a $d$-dimensional vector $\mathrm{rank}$ as the element-wise summation of $\mathrm{rank}^h$ over all $h \in [H]$, i.e., $\mathrm{rank} = \sum_{h \in [H]} \mathrm{rank}^h$.

Recall that in Section 3.1 we remarked that we leverage the SCORE approach from Rolland et al. [65] for estimating $\mathrm{diag}(\nabla^2\log p(x))$ at each data point. Recall also that our identifiability result (Theorem 1) depends on determining whether a leaf node has variance $\mathrm{Var}_q\left(\frac{\partial s_j(x)}{\partial x_j}\right) = 0$. In practice, it is unrealistic to simply test for the equality $\mathrm{Var}_L = 0$ since the estimate of $\mathrm{Var}_L$ carries errors due to finite samples. Instead, we define the following statistic for each estimated leaf node $L$ (Line 10 in Algorithm 3):

$$
\operatorname{stats}_{L} = \frac{\operatorname{Var}_{L}}{\min_{h} \operatorname{Var}_{L}^{h} + \epsilon}. \tag{8}
$$

Algorithm 3 Practical version of Algorithm 1
Input: Datasets $X^1, \ldots, X^H$, threshold $t$
Output: Shifted variables set $\widehat{S}$, and topological sort $\hat{\pi}$.
1: Initialize $\widehat{S} = \emptyset$, $\hat{\pi} = (\cdot)$, $\mathcal{N} = \{1, \ldots, d\}$
2: $\mathrm{stats} \leftarrow (0, \ldots, 0) \in \mathbb{R}^d$
3: Set $\mathbf{X} = [(\mathbf{X}^1)^\top | \cdots | (\mathbf{X}^H)^\top]^\top \in \mathbb{R}^{m \times d}$
4: while $\mathcal{N} \neq \emptyset$ do
5: $\forall h \in [H]$, $\widehat{\mathrm{Var}}^h \gets \mathrm{SCORE}(\mathbf{X}^h)$
6: $\forall h \in [H]$, $\mathrm{rank}^h \gets \mathrm{arg\_sort}(\widehat{\mathrm{Var}}^h)$
7: $\mathrm{rank} \gets \sum_{h \in [H]} \mathrm{rank}^h$
8: $\widehat{L} \gets \mathrm{arg\_min}_j\, \mathrm{rank}_j$
9: $\widehat{\mathrm{Var}} \gets \mathrm{SCORE}(\mathbf{X})$
10: $\mathrm{stats}_{\widehat{L}} \gets \widehat{\mathrm{Var}}_{\widehat{L}} \,/\, (\min_h \widehat{\mathrm{Var}}_{\widehat{L}}^h + \epsilon)$
11: $\mathcal{N} \gets \mathcal{N} \setminus \{\widehat{L}\}$
12: Remove the $\widehat{L}$-th column of $\mathbf{X}^h$, $\forall h \in [H]$, and of $\mathbf{X}$
13: $\hat{\pi} \gets (\widehat{L}, \hat{\pi})$
14: $\widehat{S} = \{j \in [d] \mid \mathrm{stats}_j > t\}$

The intuition behind this ratio is the following: if the leaf node $\widehat{L}$ is a shifted node, then we can expect $\frac{\mathrm{Var}_{\widehat{L}}}{\min_h \mathrm{Var}_{\widehat{L}}^h}$ to be large, since $\mathrm{Var}_{\widehat{L}} > 0$ (by Theorem 1) and $\mathrm{Var}_{\widehat{L}}^h \approx 0$ (by Proposition 1). On the other hand, if the leaf node $\widehat{L}$ is not a shifted node, then we can expect this ratio to be small. This is because, given a consistent estimator, $\mathrm{Var}_{\widehat{L}}$ converges towards 0 (by Theorem 1) at a faster rate than $\mathrm{Var}_{\widehat{L}}^h$, since a larger amount of data is used for estimating $\mathrm{Var}_{\widehat{L}}$. Finally, $\epsilon$ in the denominator is a very small value, e.g., $10^{-9}$, and acts as a safeguard against division by zero.

Then, given the statistic in eq.(8), we can set a threshold $t$ and define the set of shifted nodes $\widehat{S}$ as all nodes $j$ such that $\mathrm{stats}_j > t$ (Line 14 in Algorithm 3).
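The main loop of Algorithm 3 can be sketched as follows. This is an illustrative sketch, not the reference implementation: `iscan_practical` is a hypothetical name, and `score_var` is an assumed callable standing in for the SCORE(·) procedure of eq.(7), i.e., any routine that returns, for each remaining node, the sample variance of the estimated diagonal of the score's Jacobian. The leaf-selection and thresholding steps mirror Lines 4-14 above.

```python
import numpy as np

def iscan_practical(Xs, score_var, t=2.0, eps=1e-9):
    """Sketch of Algorithm 3. `Xs` is a list of (m_h, d) arrays, one per
    environment; `score_var(X)` returns a per-node variance vector (eq. (7));
    `t` thresholds the ratio in eq. (8)."""
    d = Xs[0].shape[1]
    Xs = [X.copy() for X in Xs]
    X = np.vstack(Xs)                      # pooled (mixture) data
    nodes = list(range(d))                 # original indices of remaining columns
    stats = np.zeros(d)
    order = []
    while nodes:
        var_h = np.array([score_var(Xh) for Xh in Xs])   # shape (H, |nodes|)
        # rank of each node's variance within each environment (double argsort),
        # summed over environments; the smallest aggregate rank is the leaf.
        rank = np.argsort(np.argsort(var_h, axis=1), axis=1).sum(axis=0)
        leaf = int(np.argmin(rank))
        var_pool = score_var(X)            # variances under the mixture
        stats[nodes[leaf]] = var_pool[leaf] / (var_h[:, leaf].min() + eps)  # eq. (8)
        order.insert(0, nodes.pop(leaf))   # prepend the leaf to the order
        Xs = [np.delete(Xh, leaf, axis=1) for Xh in Xs]
        X = np.delete(X, leaf, axis=1)
    shifted = {j for j in range(d) if stats[j] > t}
    return shifted, order
```

Plugging in a kernel-based `score_var` built on eqs.(6)-(7) recovers the full method; any consistent estimator of the score's Jacobian diagonal can be substituted, in line with the remark that Theorem 1 is agnostic to the score estimator.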
Remark 6 (The elbow strategy). Alternatively, we can employ an adaptive approach to identify the set of shifted nodes by sorting stats in non-increasing order and looking for the "elbow" point. For example, Figure 5 illustrates the variance ratio in eq.(8) for each node, sorted in non-increasing order. In this case, node index 5 corresponds to the elbow point, allowing us to estimate nodes 5 and 8 as shifted nodes. Identifying the elbow point has the advantage of detecting shifted nodes without relying on a fixed threshold.

# A.3 Algorithm details for FOCI

Algorithm 4 Feature ordering by conditional independence (FOCI)
Input: Data $\widehat{\mathrm{Pre}}(\pmb{X}_j^h), \pmb{X}_j^h$
Output: Estimated parents $\widehat{\mathrm{PA}}_j^h$ of $X_{j}$
1: $P\gets \emptyset$
2: Let $T_{m}(i,j,P)\equiv T_{m}(\pmb{X}_{j}^{h},\pmb{X}_{i}^{h}\mid \pmb{X}_{P}^{h})\quad \triangleright T_{m}$ is the estimator in Azadkia and Chatterjee [3].
3: while $\max_{X_i\notin P,X_i\in \widehat{\mathrm{Pre}}(X_j)}T_m(i,j,P) > 0$ do
4: $P\gets P\cup \left\{\arg \max_{X_i\notin P,X_i\in \widehat{\mathrm{Pre}}(X_j)}T_m(i,j,P)\right\}$
5: $\widehat{\mathrm{PA}}_j^h\gets P$

![](images/d4b1bd75cb3a544fff70dd82fc796c695e439e59d55d1a50dedbece93a2f66dd.jpg)
Figure 5: Statistic in eq.(8) for each node, sorted in non-increasing order. Node index 5 corresponds to the elbow point, allowing us to estimate nodes 5 and 8 as shifted nodes.

# B Detailed Proofs

# B.1 Proof of Theorem 1

To prove Theorem 1, we will make use of the following lemmas.

Lemma 1. Let $\{a_h\}_{h=1}^H$ and $\{b_h\}_{h=1}^H$ be two sequences of real numbers, where $a_h > 0, \forall h$. Then we have:

$$
\left(\sum_{h=1}^{H} a_h b_h^2\right)\left(\sum_{h=1}^{H} a_h\right) - \left(\sum_{h=1}^{H} a_h b_h\right)^2 \geq 0,
$$

with equality if and only if $b_i = b_j$ for all $i, j \in [H]$.

Proof.
We can invoke the Cauchy-Schwarz inequality with vectors $\pmb{u} = (\sqrt{a_1},\dots ,\sqrt{a_H})$ , and $\pmb{v} = (b_{1}\sqrt{a_{1}},\ldots ,b_{H}\sqrt{a_{H}})$ , then we have: + +$$ +\left(\boldsymbol {u} ^ {\top} \boldsymbol {v}\right) ^ {2} \leq \| \boldsymbol {u} \| _ {2} ^ {2} \| \boldsymbol {v} \| _ {2} ^ {2}, +$$ + +which proves the inequality. The equality holds if and only if $\pmb{u}$ and $\pmb{v}$ are linearly dependent, i.e., when $b_{i} = b_{j}$ for all $i\neq j\in [H]$ . + +Lemma 2. For any $j$ , if $\mathbb{P}^h (X_j\mid \mathrm{PA}_j^h) = \mathbb{P}^{h'}(X_j\mid \mathrm{PA}_j^{h'})$ , then $c_{j}^{h} = c_{j}^{h'}$ . + +Proof. Denote the associated density of $\mathbb{P}^h (X_j\mid \mathrm{PA}_j^h)$ when $X_{j} = x_{j}$ as $p_{N_j}^h (x_j - f_j^h (\mathrm{PA}_j^h))$ and let $u = x_{j} - f_{j}^{h}(\mathrm{PA}_{j}^{h})$ + +$$ +\begin{array}{l} \frac {\partial^ {2}}{\partial (x _ {j}) ^ {2}} \log p _ {N _ {j}} ^ {h} (x _ {j} - f _ {j} ^ {h} (\mathrm {P A} _ {j} ^ {h})) \\ = \frac {\partial \log p _ {N _ {j}} ^ {h} (u)}{\partial u} \frac {\partial^ {2} u}{\partial (x _ {j}) ^ {2}} + \frac {\partial^ {2} \log p _ {N _ {j}} ^ {h} (u)}{\partial u ^ {2}} \left(\frac {\partial u}{\partial x _ {j}}\right) ^ {2} \\ = 0 + c _ {j} ^ {h} = c _ {j} ^ {h} \\ \end{array} +$$ + +where we use the fact that $\frac{\partial u}{\partial x_j} = 1, \frac{\partial^2 u}{\partial (x_j)^2} = 0$ . Then it immediately follows that if $\mathbb{P}^h(X_j \mid \mathrm{PA}_j^h) = \mathbb{P}^{h'}(X_j \mid \mathrm{PA}_j^{h'})$ , then $c_j^h = c_j^{h'}$ + +Lemma 3. 
For any $j$ , under Assumption B, $\frac{\partial}{\partial x_j} \log p_{N_j}^h (x_j - f_j^h (\mathrm{PA}_j^h)) = \frac{\partial}{\partial x_j} \log p_{N_j}^{h'}(x_j - f_j^{h'}(\mathrm{PA}_j^{h'}))$ if and only if $\mathbb{P}^h (X_j\mid \mathrm{PA}_j^h) = \mathbb{P}^{h'}(X_j\mid \mathrm{PA}_j^{h'})$ , where $p^h$ and $p^{h'}$ are the probability density functions corresponding to the probability measures $\mathbb{P}^h$ and $\mathbb{P}^{h'}$ when $X_{j} = x_{j}$ .
+
+Proof. Denote the associated density of $\mathbb{P}^h (X_j\mid \mathrm{PA}_j^h)$ when $X_{j} = x_{j}$ as $p_{N_j}^h (x_j - f_j^h (\mathrm{PA}_j^h))$ . We proceed as follows:
+
+$$
+\frac{\partial}{\partial x_j} \log p_{N_j}^h (x_j - f_j^h (\mathrm{PA}_j^h)) = \frac{\partial}{\partial x_j} \log p_{N_j}^{h'} (x_j - f_j^{h'} (\mathrm{PA}_j^{h'}))
+$$
+
+$$
+\begin{array}{l} \Longleftrightarrow \log p_{N_j}^h (x_j - f_j^h (\mathrm{PA}_j^h)) = \log p_{N_j}^{h'} (x_j - f_j^{h'} (\mathrm{PA}_j^{h'})) + \mathrm{const} \\ \Longleftrightarrow p_{N_j}^h \left(x_j - f_j^h \left(\mathrm{PA}_j^h\right)\right) = p_{N_j}^{h'} \left(x_j - f_j^{h'} \left(\mathrm{PA}_j^{h'}\right)\right) \cdot e^{\mathrm{const}} \\ \Rightarrow \int_{\mathbb{R}} p_{N_j}^h \left(x_j - f_j^h \left(\mathrm{PA}_j^h\right)\right) \mathrm{d}x_j = e^{\mathrm{const}} \cdot \int_{\mathbb{R}} p_{N_j}^{h'} \left(x_j - f_j^{h'} \left(\mathrm{PA}_j^{h'}\right)\right) \mathrm{d}x_j \\ \Rightarrow \int_{\mathbb{R}} p_{N_j}^h (x_j - f_j^h (\mathrm{PA}_j^h)) \, \mathrm{d}(x_j - f_j^h (\mathrm{PA}_j^h)) = e^{\mathrm{const}} \cdot \int_{\mathbb{R}} p_{N_j}^{h'} (x_j - f_j^{h'} (\mathrm{PA}_j^{h'})) \, \mathrm{d}(x_j - f_j^{h'} (\mathrm{PA}_j^{h'})) \\ \Rightarrow 1 = 1 \cdot e^{\mathrm{const}} \\ \Rightarrow \mathrm{const} = 0 \\ \end{array}
+$$
+
+Here, $\mathrm{const}$ is a constant that is independent of $x_{j}$ . Integrating both sides with respect to $x_{j}$ and using the fact that $\int p^{h}(x)\mathrm{d}x = 1$ , we conclude that $\mathrm{const} = 0$ . Hence, we can establish the following:
+
+$$
+\begin{array}{l} \frac{\partial}{\partial x_j} \log p_{N_j}^h (x_j - f_j^h (\mathrm{PA}_j^h)) = \frac{\partial}{\partial x_j} \log p_{N_j}^{h'} (x_j - f_j^{h'} (\mathrm{PA}_j^{h'})) \\ \Longleftrightarrow p_{N_j}^h \left(x_j - f_j^h \left(\mathrm{PA}_j^h\right)\right) = p_{N_j}^{h'} \left(x_j - f_j^{h'} \left(\mathrm{PA}_j^{h'}\right)\right) \\ \Longleftrightarrow \mathbb{P}^h \left(X_j \mid \mathrm{PA}_j^h\right) = \mathbb{P}^{h'} \left(X_j \mid \mathrm{PA}_j^{h'}\right) \\ \end{array}
+$$
+
+Proof of Theorem 1. 
Let us first expand the log density of the mixture distribution:
+
+$$
+\log q (x) = \log \left(\sum_{h=1}^{H} w_h p^h (x)\right)
+$$
+
+Then, recalling that $s(x) = \nabla \log q(x)$ , the $j$ -th entry reads:
+
+$$
+\begin{array}{l} s_j (x) = \sum_{h=1}^{H} \frac{w_h p^h (x)}{\sum_{k=1}^{H} w_k p^k (x)} \left[ \frac{\partial}{\partial x_j} \log p^h \left(x_j \mid \mathrm{PA}_j^h\right) + \sum_{i \in \mathrm{CH}_j^h} \frac{\partial}{\partial x_j} \log p^h \left(x_i \mid \mathrm{PA}_i^h\right) \right] \\ = \sum_{h=1}^{H} \frac{w_h p^h (x)}{\sum_{k=1}^{H} w_k p^k (x)} \left[ \frac{\partial}{\partial x_j} \log p_{N_j}^h \left(x_j - f_j^h \left(\mathrm{PA}_j^h\right)\right) + \sum_{i \in \mathrm{CH}_j^h} \frac{\partial}{\partial x_j} \log p_{N_i}^h \left(x_i - f_i^h \left(\mathrm{PA}_i^h\right)\right) \right] \tag{9} \\ \end{array}
+$$
+
+Condition (i). First we will prove condition (i). That is, given a leaf node $X_{j}$ in all DAGs $G^{h}$ , $X_{j}$ is not a shifted node (i.e., an invariant node) if and only if $\mathrm{Var}\left(\frac{\partial s_{j}(x)}{\partial x_{j}}\right) = 0$ .
+
+If $X_{j}$ is a leaf node in all the DAGs $G^{h}$ , then $\mathrm{CH}_j^h = \emptyset, \forall h \in [H]$ , and we can write eq.(9) as:
+
+$$
+s_j (x) = \sum_{h=1}^{H} \frac{w_h p^h (x)}{\sum_{k=1}^{H} w_k p^k (x)} \frac{\partial}{\partial x_j} \log p_{N_j}^h \left(x_j - f_j^h \left(\mathrm{PA}_j^h\right)\right)
+$$
+
+We use $\mathrm{Den}(\frac{\partial s_j(x)}{\partial x_j})$ and $\mathrm{Num}(\frac{\partial s_j(x)}{\partial x_j})$ to denote the denominator and numerator of $\frac{\partial s_j(x)}{\partial x_j}$ , respectively. 
Then we have:
+
+$$
+\mathrm{Den} \left(\frac{\partial s_j (x)}{\partial x_j}\right) = \left(\sum_{k=1}^{H} w_k p^k (x)\right)^2
+$$
+
+$$
+\begin{array}{l} \mathrm{Num} \left(\frac{\partial s_j (x)}{\partial x_j}\right) = \left[ \sum_{h=1}^{H} w_h p^h (x) \frac{\partial^2}{\partial x_j^2} \log p_{N_j}^h \left(x_j - f_j^h \left(\mathrm{PA}_j^h\right)\right) + w_h p^h (x) \left(\frac{\partial}{\partial x_j} \log p_{N_j}^h \left(x_j - f_j^h \left(\mathrm{PA}_j^h\right)\right)\right)^2 \right] \\ \times \left[ \sum_{k=1}^{H} w_k p^k (x) \right] - \left[ \sum_{h=1}^{H} w_h p^h (x) \frac{\partial}{\partial x_j} \log p_{N_j}^h \left(x_j - f_j^h \left(\mathrm{PA}_j^h\right)\right) \right]^2 \\ \end{array}
+$$
+
+Now, dividing $\mathrm{Num}(\frac{\partial s_j(x)}{\partial x_j})$ by $\mathrm{Den}(\frac{\partial s_j(x)}{\partial x_j})$ , we obtain:
+
+$$
+\begin{array}{l} \frac{\partial s_j (x)}{\partial x_j} = \frac{\mathrm{Num}(\frac{\partial s_j (x)}{\partial x_j})}{\mathrm{Den}(\frac{\partial s_j (x)}{\partial x_j})} = \sum_{h=1}^{H} \frac{w_h p^h (x)}{\sum_{k=1}^{H} w_k p^k (x)} \frac{\partial^2}{\partial x_j^2} \log p_{N_j}^h (x_j - f_j^h (\mathrm{PA}_j^h)) \\ + \sum_{h=1}^{H} \frac{w_h p^h (x)}{\sum_{k=1}^{H} w_k p^k (x)} \left(\frac{\partial}{\partial x_j} \log p_{N_j}^h \left(x_j - f_j^h \left(\mathrm{PA}_j^h\right)\right)\right)^2 \\ - \left[ \sum_{h=1}^{H} \frac{w_h p^h (x)}{\sum_{k=1}^{H} w_k p^k (x)} \frac{\partial}{\partial x_j} \log p_{N_j}^h \left(x_j - f_j^h \left(\mathrm{PA}_j^h\right)\right) \right]^2 \tag{10} \\ 
\end{array}
+$$
+
+Note that since $x_{j} \notin \mathrm{PA}_{j}^{h}$ , the function $f_{j}^{h}(\mathrm{PA}_{j}^{h})$ is independent of $x_{j}$ .
+
+Let $a_{h} = w_{h}p^{h}(x)$ , and let $b_{h} = \frac{\partial}{\partial x_{j}}\log p_{N_{j}}^{h}(x_{j} - f_{j}^{h}(\mathrm{PA}_{j}^{h}))$ . Then, the last two summands of the RHS of eq.(10) can be written as:
+
+$$
+\frac{1}{\left(\sum_{h=1}^{H} a_h\right)^2} \left[ \left(\sum_{h=1}^{H} a_h b_h^2\right) \left(\sum_{h=1}^{H} a_h\right) - \left(\sum_{h=1}^{H} a_h b_h\right)^2 \right] \geq 0, \tag{11}
+$$
+
+where the inequality follows from Lemma 1. Then, by Lemma 3, we have that $b_h = b_{h'} \Longleftrightarrow \mathbb{P}^h(X_j \mid \mathrm{PA}_j^h) = \mathbb{P}^{h'}(X_j \mid \mathrm{PA}_j^{h'})$ for all $h, h' \in [H]$ . If $b_h = b_{h'}$ holds, then by Lemma 2 we have $c_j^h = c_j^{h'} \coloneqq c_j$ , and the first term of eq.(10) boils down to the constant $c_j$ . Finally, from Lemma 1, we have that equality in eq.(11) holds if and only if $b_h = b_{h'}$ for all $h, h' \in [H]$ . Thus, we conclude that:
+
+$$
+\text{If } X_j \text{ is a leaf node for all } G^h \text{, then } X_j \text{ is not a shifted node} \Longleftrightarrow \frac{\partial s_j(x)}{\partial x_j} = c_j,
+$$
+
+where $\frac{\partial s_j(x)}{\partial x_j} = c_j$ is equivalent to $\operatorname{Var}_q\left(\frac{\partial s_j(x)}{\partial x_j}\right) = 0$ .
+
+Condition (ii). We now prove that if $\operatorname{Var}_q\left(\frac{\partial s_j(x)}{\partial x_j}\right) > 0$ , then only one of the following two cases holds: Case 1) $X_{j}$ is a leaf node for all $G^{h}$ and a shifted node. Case 2) $X_{j}$ is not a leaf node in at least one DAG $G^{h}$ .
+
+Case 1 follows immediately from the proof of condition (i) above.
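As an aside, the leaf-node criterion of condition (i) admits a quick numerical sanity check in the simplest setting: a single node with no parents observed in two environments, so that $q$ is a two-component Gaussian mixture. The sketch below is purely illustrative (it is not part of the proof, and the mixture parameters are assumed for the example); it estimates $\frac{\partial s_j(x)}{\partial x_j} = \frac{\partial^2}{\partial x_j^2} \log q(x)$ by finite differences and checks that its variance vanishes exactly when the two environments share the same marginal.

```python
import numpy as np

def gauss_pdf(x, var):
    """Density of N(0, var) evaluated at x."""
    return np.exp(-x**2 / (2 * var)) / np.sqrt(2 * np.pi * var)

def d2_log_q(x, variances, weights, eps=1e-4):
    """Finite-difference estimate of d^2/dx^2 log q(x), where q is a
    Gaussian mixture with the given component variances and weights."""
    def log_q(z):
        return np.log(sum(w * gauss_pdf(z, v) for w, v in zip(weights, variances)))
    return (log_q(x + eps) - 2 * log_q(x) + log_q(x - eps)) / eps**2

xs = np.linspace(-4.0, 4.0, 401)  # evaluation grid

# Invariant leaf node: both environments are N(0, 1), so log q is quadratic
# and d^2/dx^2 log q(x) = -1 everywhere -> variance is (numerically) zero.
g_inv = d2_log_q(xs, variances=[1.0, 1.0], weights=[0.5, 0.5])

# Shifted leaf node: N(0, 1) vs. N(0, 4) -> the derivative is non-constant.
g_shift = d2_log_q(xs, variances=[1.0, 4.0], weights=[0.5, 0.5])

print(g_inv.var(), g_shift.var())  # ~0 vs. clearly positive
```

This mirrors the dichotomy above: the variance of $\frac{\partial s_j(x)}{\partial x_j}$ is zero for the invariant node and strictly positive for the shifted one.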
+ 
+For Case 2, we study whether there exists a non-leaf node $X_{j}$ with $\operatorname{Var}_{q}\left(\frac{\partial s_{j}(x)}{\partial x_{j}}\right) = 0$ . Taking the partial derivative of $s_{j}(x)$ in eq.(9) w.r.t. $x_{j}$ , we have:
+
+$$
+\begin{array}{l} \frac{\partial s_j (x)}{\partial x_j} = \sum_{h=1}^{H} \frac{w_h p^h (x)}{\sum_{k=1}^{H} w_k p^k (x)} \left(\frac{\partial^2}{\partial x_j^2} \log p_{N_j}^h \left(x_j - f_j^h \left(\mathrm{PA}_j^h\right)\right) + \sum_{i \in \mathrm{CH}_j^h} \frac{\partial^2}{\partial x_j^2} \log p_{N_i}^h \left(x_i - f_i^h \left(\mathrm{PA}_i^h\right)\right)\right) \\ + \sum_{h=1}^{H} \frac{w_h p^h (x)}{\sum_{k=1}^{H} w_k p^k (x)} \left(\frac{\partial}{\partial x_j} \log p_{N_j}^h \left(x_j - f_j^h \left(\mathrm{PA}_j^h\right)\right) + \sum_{i \in \mathrm{CH}_j^h} \frac{\partial}{\partial x_j} \log p_{N_i}^h \left(x_i - f_i^h \left(\mathrm{PA}_i^h\right)\right)\right)^2 \\ - \left[ \sum_{h=1}^{H} \frac{w_h p^h (x)}{\sum_{k=1}^{H} w_k p^k (x)} \left(\frac{\partial}{\partial x_j} \log p_{N_j}^h \left(x_j - f_j^h \left(\mathrm{PA}_j^h\right)\right) + \sum_{i \in \mathrm{CH}_j^h} \frac{\partial}{\partial x_j} \log p_{N_i}^h \left(x_i - f_i^h \left(\mathrm{PA}_i^h\right)\right)\right)\right]^2 \\ \end{array}
+$$
+
+By Assumption B, we have $\frac{\partial^2}{\partial x_j^2}\log p_{N_j}^h (x_j - f_j^h (\mathrm{PA}_j^h)) = c_j^h$ . 
For simplicity, let $a_{h} = \frac{\partial}{\partial x_{j}}\log p_{N_{j}}^{h}(x_{j} - f_{j}^{h}(\mathrm{PA}_{j}^{h})) + \sum_{i\in \mathrm{CH}_{j}^{h}}\frac{\partial}{\partial x_{j}}\log p_{N_{i}}^{h}(x_{i} - f_{i}^{h}(\mathrm{PA}_{i}^{h}))$ . Then, we have:
+
+$$
+\begin{array}{l} \frac{\partial s_j (x)}{\partial x_j} = \sum_{h=1}^{H} \frac{w_h p^h (x)}{\sum_{k=1}^{H} w_k p^k (x)} c_j^h + \underbrace{\sum_{h=1}^{H} \frac{w_h p^h (x)}{\sum_{k=1}^{H} w_k p^k (x)} \sum_{i \in \mathrm{CH}_j^h} \frac{\partial^2}{\partial x_j^2} \log p_{N_i}^h \left(x_i - f_i^h \left(\mathrm{PA}_i^h\right)\right)}_{\text{term 1}} \\ + \underbrace{\sum_{h=1}^{H} \frac{w_h p^h (x)}{\sum_{k=1}^{H} w_k p^k (x)} a_h^2 - \left(\sum_{h=1}^{H} \frac{w_h p^h (x)}{\sum_{k=1}^{H} w_k p^k (x)} a_h\right)^2}_{\text{term 2}}. \tag{12} \\ \end{array}
+$$
+
+We prove, by contradiction, that $\frac{\partial^2}{\partial x_j^2}\log p_{N_i}^h (x_i - f_i^h (\mathrm{PA}_i^h))$ is not constant under any circumstance. Let $G^{h}$ be an environment's DAG where $X_{j}$ is not a leaf, and let $X_{u}\in \mathrm{CH}_{j}^{h}$ be such that $X_{u}\notin \cup_{i\in \mathrm{CH}_{j}^{h}}\mathrm{PA}_{i}^{h}$ . Note that such an $X_{u}$ always exists since $X_{j}$ is not a leaf: it suffices to pick a child $X_{u}$ appearing at the latest position in the topological order of $G^{h}$ . Now suppose that $\frac{\partial^2}{\partial x_j^2}\log p_{N_u}^h (x_u - f_u^h (\mathrm{PA}_u^h)) = a$ , where $a$ is a constant. 
Then we have:
+
+$$
+\frac{\partial}{\partial x_j} \log p_{N_u}^h (x_u - f_u^h (\mathrm{PA}_u^h)) = a x_j + g (x_{-j}),
+$$
+
+$$
+\frac{\partial}{\partial x_j} f_u^h (\mathrm{PA}_u^h) \cdot \frac{\partial}{\partial n_u} \log p_{N_u}^h (n_u) = a x_j + g (x_{-j}).
+$$
+
+Differentiating both sides w.r.t. $x_{u}$ , we obtain:
+
+$$
+\frac{\partial}{\partial x_j} f_u^h \left(\mathrm{PA}_u^h\right) \cdot \frac{\partial^2}{\partial n_u^2} \log p_{N_u}^h \left(n_u\right) = \frac{\partial g \left(x_{-j}\right)}{\partial x_u}
+$$
+
+$$
+\frac{\partial}{\partial x_j} f_u^h \left(\mathrm{PA}_u^h\right) \cdot c_u^h = \frac{\partial g (x_{-j})}{\partial x_u}.
+$$
+
+Since the RHS does not depend on $x_{j}$ , $\frac{\partial f_u^h}{\partial x_j}$ cannot depend on $x_{j}$ either, implying that $f_{u}^{h}$ is linear in $x_{j}$ and thus contradicting the non-linearity assumption (Assumption A). Consequently, term 1 cannot be a constant, regardless of whether the node $X_{j}$ has undergone a shift or not.
+
+Now let us take a look at term 2 in eq.(12). We have:
+
+$$
+\sum_{h=1}^{H} \frac{w_h p^h (x)}{\sum_{k=1}^{H} w_k p^k (x)} a_h^2 - \left(\sum_{h=1}^{H} \frac{w_h p^h (x)}{\sum_{k=1}^{H} w_k p^k (x)} a_h\right)^2 \geq 0,
+$$
+
+where the inequality follows by Jensen's inequality. Thus we conclude that if $X_{j}$ is a non-leaf node, we have $\operatorname{Var}_q\left(\frac{\partial s_j(x)}{\partial x_j}\right) > 0$ .
+
+# B.2 Proof of Theorem 2
+
+To prove the theorem we will need the following assumptions:
+
+Assumption C. 
Let $\mathrm{MB}_j^h$ denote the Markov blanket of node $X_{j}$ under environment $h$ ; then assume:
+
+- There are non-negative real numbers $\beta$ and $C$ such that for any subset $X_{S} \subseteq \operatorname{Pre}(X_{j}^{h})$ of size $\leq 1 / \delta + 2$ , any $x, x' \in \mathbb{R}^{|X_{S}|}$ and any $t \in \mathbb{R}$ ,
+
+$$
+\left| \mathbb{P} (X_j^h \geq t \mid X_S = x) - \mathbb{P} (X_j^h \geq t \mid X_S = x') \right| \leq C (1 + \| x \|^{\beta} + \| x' \|^{\beta}) \| x - x' \|
+$$
+
+where $|X_{S}|$ is the size of the set $X_{S}$ .
+
+- There are positive numbers $C_1$ and $C_2$ such that for any $X_S$ of size $\leq 1 / \delta + 2$ and any $t > 0$ , $\mathbb{P}(\|X_S\| \geq t) \leq C_1 e^{-C_2 t}$ .
+
+- For any subset $X_{S} \subseteq \operatorname{Pre}(X_{j}^{h})$ such that $X_{S} \subsetneq \mathrm{MB}_{j}^{h}$ , there exists $X_{i}$ with $X_{i} \in \mathrm{MB}_{j}^{h} \backslash X_{S}$ , such that for any $X_{l}$ with $X_{l} \notin \mathrm{MB}_{j}^{h}$ ,
+
+$$
+Q \left(X_S \cup \left\{X_i\right\}\right) - Q \left(X_S \cup \left\{X_l\right\}\right) \geq \delta / 4 , \quad \text{where} \quad Q \left(X_S\right) = \int \operatorname{Var} \left(\mathbb{P} \left(X_j^h \geq t \mid X_S\right)\right) d \mu (t)
+$$
+
+and $\delta$ is the largest number such that for any subset $X_{S}$ from $\operatorname{Pre}(X_j^h)$ , there is some $X_i \notin X_S$ such that $Q(X_S \cup \{X_i\}) \geq Q(X_S) + \delta$ .
+
+Proof. Under Assumption C, from Theorem 3.1 in [4], we have
+
+$$
+\mathbb{P} \left(\widehat{\mathrm{MB}}_j^h = \mathrm{MB}_j^h\right) \geq 1 - C_3 e^{-C_4 m}
+$$
+
+where $C_3$ and $C_4$ are constants that depend only on the data generation process, and $m$ is the number of samples. 
Since the estimated topological order $\hat{\pi}$ is assumed to be valid for all environments, we can conclude that node $X_j$ is a leaf node in the input data $\{\mathrm{Pre}(X_j^h), X_j^h\}$ for all $h$ . As a result, we have $\mathrm{MB}_j^h = \mathrm{PA}_j^h$ based on the Markov blanket definition. Therefore, the output of Algorithm 4 is equal to the true parent set $\mathrm{PA}_j^h$ with high probability.
+
+# B.3 Proof of Theorem 3 in Appendix D
+
+To prove Theorem 3 we will make use of the following lemmas.
+
+Lemma 4. Suppose $\mathbf{X}$ is an $n \times k_x$ matrix and $\mathbf{Y}$ is an $n \times k_y$ matrix. Let the columns of the concatenated matrix $\mathbf{Z} = (\mathbf{X},\mathbf{Y})$ be linearly independent. Consider $\tilde{\beta}_x$ , a $k_x$ -dimensional vector, and $\tilde{\beta}_y$ and $\beta_y$ , both $k_y$ -dimensional vectors. If $\mathbf{X}\tilde{\beta}_x + \mathbf{Y}\tilde{\beta}_y = \mathbf{Y}\beta_y$ , then it follows that $\tilde{\beta}_y = \beta_y$ and $\tilde{\beta}_x = \mathbf{0}$ .
+
+Proof.
+
+$$
+\boldsymbol{X} \tilde{\beta}_x + \boldsymbol{Y} \tilde{\beta}_y - \boldsymbol{Y} \beta_y = (\boldsymbol{X}, \boldsymbol{Y}) \left( \begin{array}{c} \tilde{\beta}_x \\ \tilde{\beta}_y \end{array} \right) - (\boldsymbol{X}, \boldsymbol{Y}) \left( \begin{array}{c} \mathbf{0}_{k_x} \\ \beta_y \end{array} \right) = \boldsymbol{Z} \left( \begin{array}{c} \tilde{\beta}_x \\ \tilde{\beta}_y - \beta_y \end{array} \right) = 0
+$$
+
+Since $\mathbf{Z}$ has full column rank, its null space is $\{\mathbf{0}\}$ , which implies $\tilde{\beta}_x = \mathbf{0}$ and $\tilde{\beta}_y = \beta_y$ .
+
+Lemma 5. 
For any $h \in [H]$ , if + +$$ +\sum_ {k \in \operatorname {P r e} (X _ {j})} \Psi_ {j k} \tilde {\beta} _ {j k} ^ {h} = \sum_ {k \in \mathrm {P A} _ {j} ^ {h}} \Psi_ {j k} \beta_ {j k} ^ {h}, +$$ + +then $\tilde{\beta}_{jk}^{h} = \beta_{jk}^{h}$ if $k\in \mathrm{PA}_j^h$ , and $\tilde{\beta}_{jk}^{h} = 0$ if $k\notin \mathrm{PA}_j^h$ + +Proof. Rearrange the set $\operatorname{Pre}(X_j)$ so that $\operatorname{Pre}(X_j) = \{X_{k_1}, X_{k_2}, \ldots, X_{k_m}, X_{k_{m+1}}, \ldots, X_{k_p}\}$ , where $\{X_{k_1}, \ldots, X_{k_m}\} = \operatorname{Pre}(X_j) \setminus \mathrm{PA}_j$ , and $\{X_{k_{m+1}}, \ldots, X_{k_p}\} = \mathrm{PA}_j$ . Then let $\mathbf{X} = (\Psi_{k_1}, \ldots, \Psi_{k_m})$ , $\mathbf{Y} = (\Psi_{k_{m+1}}, \ldots, \Psi_{k_p})$ , and $\mathbf{Z} = (\mathbf{X}, \mathbf{Y})$ . By the linear independence property of the basis functions of $\operatorname{Pre}(X_j)$ , we have that the columns of $\mathbf{Z}$ are linearly independent. Also, let + +$$ +\tilde {\beta} _ {x} = \left( \begin{array}{c} \tilde {\beta} _ {j k _ {1}} ^ {h} \\ \vdots \\ \tilde {\beta} _ {j k _ {m}} ^ {h} \end{array} \right), \quad \tilde {\beta} _ {y} = \left( \begin{array}{c} \tilde {\beta} _ {j k _ {m + 1}} ^ {h} \\ \vdots \\ \tilde {\beta} _ {j k _ {p}} ^ {h} \end{array} \right), \quad \beta_ {y} = \left( \begin{array}{c} \beta_ {j k _ {m + 1}} ^ {h} \\ \vdots \\ \beta_ {j k _ {p}} ^ {h} \end{array} \right) +$$ + +Then we must have: + +$$ +\sum_ {k \in \operatorname {P r e} (X _ {j})} \Psi_ {j k} \tilde {\beta} _ {j k} ^ {h} = \sum_ {k \in \mathrm {P A} _ {j} ^ {h}} \Psi_ {j k} \beta_ {j k} ^ {h} \quad \Rightarrow \quad \boldsymbol {X} \tilde {\beta} _ {x} + \boldsymbol {Y} \tilde {\beta} _ {y} = \boldsymbol {Y} \beta_ {y}. 
+$$
+
+Then by Lemma 4, we have $\tilde{\beta}_{jk}^{h} = \beta_{jk}^{h}$ if $k\in \mathrm{PA}_j^h$ , and $\tilde{\beta}_{jk}^{h} = 0$ if $k\notin \mathrm{PA}_j^h$ .
+
+Proof of Theorem 3. We know that $\operatorname{Pre}(X_j)$ contains the ancestors of $X_j$ . Then, in environment $h$ , we have:
+
+$$
+\begin{array}{l} \mathbb{E}_{p^h} \left[ X_j \mid \operatorname{Pre} \left(X_j\right) \right] = \mathbb{E}_{p^h} \left[ f_j^h \left(\mathrm{PA}_j^h\right) \mid \operatorname{Pre} \left(X_j\right) \right] + \mathbb{E}_{p^h} \left[ N_j \mid \operatorname{Pre} \left(X_j\right) \right] \\ = f_j^h (\mathrm{PA}_j^h), \\ \end{array}
+$$
+
+where the last equality follows since the first conditional expectation is equal to $f_{j}^{h}(\mathrm{PA}_{j}^{h})$ , due to $\mathrm{PA}_j^h\subseteq \operatorname{Pre}(X_j)$ . Moreover, in the second conditional expectation term, we have that $N_{j}$ is marginally independent of $\operatorname{Pre}(X_j)$ by the d-separation criterion. Thus the conditional expectation of $N_{j}$ equals the marginal expectation of $N_{j}$ , which is 0. Finally,
+
+$$
+\sum_{k \in \operatorname{Pre}(X_j)} \Psi_{jk} \tilde{\beta}_{jk}^h = \mathbb{E}_{p^h} [ X_j \mid \operatorname{Pre}(X_j) ] = f_j^h (\mathrm{PA}_j^h) = \sum_{k \in \mathrm{PA}_j^h} \Psi_{jk} \beta_{jk}^h
+$$
+
+By Lemma 5, we have $\tilde{\beta}_{jk}^{h} = \beta_{jk}^{h}$ if $k\in \mathrm{PA}_j^h$ , and $\tilde{\beta}_{jk}^{h} = 0$ if $k\notin \mathrm{PA}_j^h$ .
+
+# C Additional Experiments
+
+This section provides a thorough evaluation of the pipeline of our method. We begin by assessing the performance of our method in detecting the shifted nodes. 
Subsequently, we extend the evaluation to include the recovery of the structurally shifted edges.
+
+# C.1 Experiments on detecting shifted nodes
+
+Graph models. We ran experiments by generating adjacency matrices using the Erdős–Rényi (ER) and scale-free (SF) graph models. For a given number of variables $d$ , ER$k$ and SF$k$ indicate an average number of edges equal to $kd$ .
+
+Data generation process. We first sampled a Directed Acyclic Graph (DAG) according to either the Erdős–Rényi (ER) model or the scale-free (SF) model for environment $\mathcal{E}_1$ .
+
+For environment $\mathcal{E}_2$ , we used the same DAG structure as in environment $\mathcal{E}_1$ , ensuring a direct comparison between the two environments. To introduce artificial shifted nodes, we randomly selected $0.2 \cdot d$ nodes from the non-root nodes, where $d$ represents the total number of nodes in the DAG. These selected nodes were taken to be the "shifted nodes," denoted as $S$ , with $|S| = 0.2d$ .
+
+The functional relationship between a node $X_{j}$ and its parents in environment $\mathcal{E}_1$ was defined as follows:
+
+$$
+X_j = \sum_{i \in \mathrm{PA}_j} \sin (X_i^2) + N_j,
+$$
+
+while for environment $\mathcal{E}_2$ , we defined the functional relationships between each node and its parents by:
+
+$$
+X_j = \left\{ \begin{array}{ll} \sum_{i \in \mathrm{PA}_j} \sin (X_i^2) + N_j, & \text{if } X_j \notin S, \\ \sum_{i \in \mathrm{PA}_j} 4 \cos (2 X_i^2 - 3 X_i) + N_j, & \text{if } X_j \in S. \end{array} \right.
+$$
+
+Experiment details. In each simulation, we generated 500 data points, with the variances of the noise set to 1. We conducted 30 simulations for each combination of graph type, noise type, and number of nodes. The running time was recorded by executing the experiments on an Intel Xeon Gold 6248R Processor with 8 cores. 
For our method, we used the hyperparameters eta_G = 0.005, eta_H = 0.005, and threshold $t = 2$ (see Algorithm 3). + +Evaluation. We conducted a comparative analysis to evaluate the performance of our method in detecting shifted nodes compared to DCI. The evaluation was based on F1 score, precision, and recall as the evaluation metrics. Furthermore, we examined the robustness of our method by conducting tests using Gumbel and Laplace as noise distributions. + +Figures 6, 7, and 8 illustrate our method's performance across varying numbers of nodes and sparsity levels of the graphs. Our method consistently outperformed DCI in terms of F1 score, precision, and recall. + +![](images/202c891e7e384e91b9683f3dbd37bf9ec96035fb35af04685775b8817b499258.jpg) +Figure 6: Shifted nodes detection in ER2 and SF2 graphs. For each point, we conducted 30 simulations as described in Section C.1. The points indicate the average values obtained from these simulations, while the error bars depict the standard errors. For each simulation, 500 samples were generated. Our method iSCAN (green) consistently outperformed DCI (red) in terms of F1 score, precision, and recall. + +![](images/0dc641c04c7166702f5ae98e82c212480e0ae15a5d56ed0e440c3312770dd8bd.jpg) + +![](images/bc4129d02692d0cb79c20de65745ce310a43837a679a78993c3d251cec82a907.jpg) + +![](images/7682167effdd24d3c870ae562d3f98df1e7c6b0e4e3847ffc56a23835ac17d85.jpg) +Methods DCI iSCAN (ours) + +![](images/6327b51a6d5852a15155dac320580b7ffc0c74b9b25deac2dd088e5146845ad9.jpg) + +![](images/7815d8317a5b891f1299cf1775738bcb954866700bc1940a2c700c1548116392.jpg) +Figure 7: Shifted nodes detection in ER4 and SF4 graphs. For each point, we conducted 30 simulations as described in Section C.1. The points indicate the average values obtained from these simulations, while the error bars depict the standard errors. For each simulation, 500 samples were generated. 
Our method iSCAN (green) consistently outperformed DCI (red) in terms of F1 score, precision, and recall. + +![](images/d8d834fa16751add9f0e5e53e640b34c56e61018bcda0e8aa140928cbe5d8b52.jpg) + +![](images/35cf754611e305c1190b111e9e861db79b5c1a377c423fdce9758ff0a2fc467f.jpg) +Methods DCI iSCAN (ours) + +![](images/0c6cababc294e525773240a94368ca21ed3475a0d98df5a33f84270f3a5fdf7d.jpg) +Figure 8: Shifted nodes detection in ER6 and SF6 graphs. For each point, we conducted 30 simulations as described in Section C.1. The points indicate the average values obtained from these simulations, while the error bars depict the standard errors. For each simulation, 500 samples were generated. Our method iSCAN (green) consistently outperformed DCI (red) in terms of F1 score, precision, and recall. + +# C.1.1 Experiments in detecting shifted nodes from Gaussian process + +Data generation process. We first sampled a DAG according to the ER or SF model. In our experiment, we considered two environments, $\mathcal{E}_1$ and $\mathcal{E}_2$ , with the same DAG structure. Each node in the graph had a functional relationship with its parents defined as $X_{j} = f_{j}^{h}(\mathrm{PA}_{j}^{h}) + N_{j}$ , where $N_{j}$ is an independent standard Gaussian variable. Recall that the superscript $h$ denotes the function for environment $\mathcal{E}_h$ . + +To introduce shifted nodes, we randomly selected $0.2 \cdot d$ nodes from the non-root nodes, denoted as $S$ , to be the shifted nodes. In other words, $|S| = 0.2d$ . For the non-shifted nodes $X_{j}$ (i.e $j \notin S$ ), we set $f_{j}^{1} = f_{j}^{2}$ . However, for each shifted node $X_{j}$ in $S$ , we changed its functional relationship with its parents to $X_{j} = 2 \cdot f_{j}^{2}(\mathrm{PA}_{j}^{2}) + N_{j}$ . + +To test our method in a more general setting involving nonlinear functions, we followed the approach in [65, 39]. 
Specifically, for non-shifted nodes, we generated the link functions $f_{j}^{1}$ by sampling Gaussian processes with a half unit bandwidth RBF kernel, and we set $f_{j}^{2} = f_{j}^{1}$ . For shifted nodes, $X_{j} \in S$ , we generated the link functions $f_{j}^{1}$ and $f_{j}^{2}$ by sampling Gaussian processes with a half unit bandwidth RBF kernel independently. This allowed us to simulate different functional relationships for the shifted nodes across the two environments. + +Experiment detail. In each simulation, we generated 1000 data points, with the variances of the noise set to 1. We conducted 30 simulations for each combination of graph type, noise type, and number of nodes. The running time was recorded by executing the experiments on an Intel Xeon Gold 6248R Processor with 8 cores. For our method, we used the hyperparameters eta_G = 0.005, eta_H = 0.005, and elbow = True (see Remark 6). + +Evaluation. We conducted a comparative performance analysis between our proposed Algorithm 1 (iSCAN, green) and the DCI (red) method. The results for ER2 and SF2 graphs under Gaussian, Gumbel, and Laplace noise distributions are shown in Figure 9. In certain cases, our method may underperform DCI in terms of precision, resulting in a lower F1 score. However, it is important to note that our method consistently outperforms DCI in terms of recall score. + +Furthermore, Figure 10 and Figure 11 present the results for ER4/SF4, and ER6/SF6 graphs. In terms of precision, our method exhibits competitive performance and, in many cases, outperforms DCI. Notably, iSCAN consistently surpasses DCI in terms of recall score and F1 score. + +These findings emphasize the strengths of our proposed method in accurately detecting shifted nodes and edges, particularly in terms of recall and overall performance. In denser graphs, our method demonstrates a superior ability to recover shifted nodes compared to DCI. 
This suggests that our method is well-suited for scenarios where the graph structure is more complex and contains a larger number of nodes and edges. The improved performance of our method in such settings further highlights its potential in practical applications and its ability to handle more challenging tasks.
+
+Top-k precision. We have observed that in some cases, the precision of our method underperformed that of DCI. We attribute this to the elbow method rather than $\text{stats}_L$ . To further investigate this, we conducted an analysis using only $\text{stats}_L$ and measured the precision based on different criteria. Specifically, we identified nodes as shifted if their $\text{stats}_L$ ranked first, within the first two, or within the top $k$ , denoted as top-1 precision, top-2 precision, and top-$k$ precision, respectively, where $k = |S|$ .
+
+Figure 12 presents the results of precision for top-1, top-2, and top-$k$ criteria under various graph models and noise combinations. In most cases, the precision exceeds $80\%$ and even approaches $100\%$ . These results indicate that when using $\mathrm{stats}_L$ alone, our method still provides accurate information about shifted nodes. The findings suggest that the lower precision observed in Figure 9 can be attributed to the elbow strategy rather than the effectiveness of $\mathrm{stats}_L$ . Overall, this analysis strengthens the reliability and usefulness of $\mathrm{stats}_L$ in accurately identifying shifted nodes in our method.
+
+![](images/5da9a83a3ae6944e06834e77e228f2381b0b8dfd64d093989a11abee2cfed3d4.jpg)
+
+![](images/95969b97bd13918e623f9249e384a1c31c313300b5bc5d0997dae95ff775d01c.jpg)
+
+Figure 9: Experiments on detection of shifted nodes in ER2/SF2 graphs using Gaussian processes. Details described in Appendix C.1.1. The error bars represent the standard errors. 
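The top-$k$ criterion described above can be sketched in a few lines. The statistic values and node indices below are hypothetical (chosen so that two nodes clearly stand out, as in Figure 5); the helper `top_k_precision` is illustrative, not our actual evaluation code.

```python
import numpy as np

def top_k_precision(stats, shifted, k):
    """Fraction of the k highest-scoring nodes that are truly shifted.

    stats   : per-node statistics (higher means more likely shifted)
    shifted : set of ground-truth shifted node indices
    k       : number of top-ranked nodes declared as shifted
    """
    top = np.argsort(stats)[::-1][:k]          # indices of the k largest stats
    return len(set(top) & set(shifted)) / k

# Hypothetical scores for 8 nodes; nodes 4 and 7 are the true shifted ones.
stats = np.array([0.10, 0.20, 0.15, 0.12, 2.50, 0.18, 0.11, 3.10])
shifted = {4, 7}
print(top_k_precision(stats, shifted, k=2))  # 1.0: both top scores are shifted
print(top_k_precision(stats, shifted, k=3))  # 2/3: one false positive enters
```

With $k = |S|$ this reduces to the top-$k$ precision reported in Figure 12, while `k=1` and `k=2` give the top-1 and top-2 variants.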
+![](images/68008f2cb57ea7de927bc50784bb38835b029f8663b7dbbcdb5643f40030fd0f.jpg) +Methods DCI iSCAN (ours) + +![](images/252f4ec51a4eebff1524668f3ed54eadb4f3c5509e24b6eff00f4d430ccf2c43.jpg) + +![](images/4e28cc5fcb0364914c3fd803951d3b1254bb25c5df8cc32e2c3b49452ceabefe.jpg) + +![](images/e9c0573829688c2727557555129c92bf7cb981d7e7770d2c59a7b53f4c0dd9bc.jpg) + +Figure 10: Experiments on detection of shifted nodes in ER4/SF4 graphs using Gaussian processes. Details described in Appendix C.1.1. The error bars represent the standard errors. +![](images/a53741d3d5d07834cb41b57c9a7f854bd6188af98f0e0c76008eefbf381f51eb.jpg) +Methods DCI iSCAN (ours) + +![](images/c622da8685e33d155677895a9280e19cda8b312f77327fa1f8caf6fb3bf84735.jpg) + +![](images/cd6fed7e216d0b8841d671d329e4dab9034a23ed2abed9eccea72377ba0fce4a.jpg) + +![](images/487c186be7d0e127445032c21310e4b86c30084b13734206571b44807b8041ec.jpg) + +Figure 11: Experiments on detection of shifted nodes in ER6/SF6 graphs using Gaussian processes. Details described in Appendix C.1.1. The error bars represent the standard errors. +![](images/915e58475215c96a63346a4e333a2cd142f9dcd15a245bb9a78878bb97611a6a.jpg) +Methods DCI iSCAN (ours) + +![](images/0bb4c685cf1590154d5608df5c41b8ce489c1560fab756d11c0e93d728b147b6.jpg) + +![](images/0e5905c13b15c840f2140713362f48de9d9bb6fad6abae99155c852ad7073ea3.jpg) +Figure 12: Top 1, 2 and K performance of iSCAN where functionals are sampled from Gaussian processes. Details described in Appendix C.1.1. The error bars represent the standard errors. + +![](images/29227d3efaaa60afd2789650c9332f4bdba027ff29068304d34676c9d93055b4.jpg) +Methods iSCAN (ours) + +![](images/f7bc33d41e25e9bd09470827b144eb4c6adc35d0d51d6a6190ff66755b0e8b6f.jpg) + +# C.2 Experiments on estimating structural shifts + +Data generation. We first sampled a DAG, $G^{1}$ , of $d$ nodes according to either the ER or SF model for env. $\mathcal{E}_1$ . For env. $\mathcal{E}_2$ , we initialized its DAG structure from env. 
$\mathcal{E}_1$ and produced structural changes by randomly selecting $0.2 \cdot d$ nodes from the non-root nodes. This set of selected nodes $S$ , with cardinality $|S| = 0.2d$ , corresponds to the set of "shifted nodes". In env. $\mathcal{E}_2$ , for each shifted node $X_{j} \in S$ , we uniformly at random deleted at most three of its incoming edges, and we use $D_{j}$ to denote the parents whose edges to $X_{j}$ were deleted; thus, the DAG $G^{2}$ is a subgraph of $G^{1}$ . Then, in $\mathcal{E}_1$ , each $X_{j}$ was defined as follows: + +$$ +X_j = \sum_{i \in \mathrm{PA}_j^1 \setminus D_j} \sin\left(X_i^2\right) + \sum_{i \in D_j} 4 \cos\left(2 X_i^2 - 3 X_i\right) + N_j +$$ + +In $\mathcal{E}_2$ , each $X_{j}$ was defined as follows: + +$$ +X_j = \sum_{i \in \mathrm{PA}_j^2} \sin(X_i^2) + N_j +$$ + +Experiment details. For each simulation, we generated 500 data points per environment, i.e., $m_{1} = 500$ , $m_{2} = 500$ and $m = 1000$ . The noise variances were set to 1. We conducted 30 simulations for each combination of graph type (ER or SF), noise type (Gaussian, Gumbel, and Laplace), and number of nodes ( $d \in \{10, 20, 30, 50\}$ ). The running time was recorded by executing the experiments on an Intel Xeon Gold 6248R Processor with 8 cores. For our method, we used $\eta = 0.05$ for eq.(6) and eq.(7), and a threshold $t = 2$ (see Alg. 3). + +In the case of the method introduced by Budhathoki et al. [11], we employed Kernel Conditional Independence (KCI) tests [86] for conducting conditional independence tests. As for CITE, KCD, UT-IGSP, and SCORE, we used their respective default parameter settings provided within their packages. Additionally, SCORE was employed to estimate the DAGs independently for each environment; the recovered DAGs were then compared to identify the shifted nodes.
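For concreteness, the two structural equation models above can be simulated on a toy 3-node chain $X_1 \to X_2 \to X_3$, taking $X_3$ as the only shifted node with its incoming edge $X_2 \to X_3$ deleted in $\mathcal{E}_2$ (so $D_3 = \{2\}$). The chain graph and the per-environment sampling functions are our illustrative choices, not the paper's actual ER/SF generator:

```python
import numpy as np

rng = np.random.default_rng(0)
m = 500  # samples per environment, matching m_1 = m_2 = 500

def sample_env1(m):
    # In E1, X3's mechanism uses the to-be-deleted parent X2 via the cosine term.
    X1 = rng.normal(size=m)
    X2 = np.sin(X1 ** 2) + rng.normal(size=m)
    X3 = 4 * np.cos(2 * X2 ** 2 - 3 * X2) + rng.normal(size=m)
    return np.column_stack([X1, X2, X3])

def sample_env2(m):
    # In E2 the edge X2 -> X3 is deleted, so PA_3^2 is empty and X3 is pure noise.
    X1 = rng.normal(size=m)
    X2 = np.sin(X1 ** 2) + rng.normal(size=m)
    X3 = rng.normal(size=m)
    return np.column_stack([X1, X2, X3])

data1, data2 = sample_env1(m), sample_env2(m)
```

Only the mechanism of the shifted node differs between the two samplers; the invariant nodes $X_1$ and $X_2$ follow the same $\sin(X_i^2)$ mechanism in both environments.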
Given that the methods of Budhathoki et al. and KCD require information about the parents $\mathrm{PA}_j$ of each node $X_{j}$ , we employed the SCORE method to find the parent sets $\mathrm{PA}_j$ . + +Evaluation. In this experiment, we assessed the performance of our method in two aspects: detecting shifted nodes and recovering the structural changes (difference DAG). For the evaluation of shifted node detection, we measured F1 score, recall, and precision. In the evaluation of difference DAG recovery, we compared the estimated difference DAG with the ground truth difference DAG using F1 score. Additionally, we considered the running time of the methods as another evaluation criterion. + +Figures 13, 14, and 15 illustrate our method's performance in detecting shifted nodes across varying numbers of nodes and sparsity levels of the graphs. Our method consistently outperformed baselines in terms of F1 score, precision, and recall. + +Figure 16 showcases the performance of our method in recovering the difference DAG across different noise distributions, numbers of nodes, and sparsity levels in the graphs. Our method achieves a higher F1 score in recovering the difference DAG compared with DCI. + +![](images/d8815a62ce923d1928ac0130c7490b26947304e212feba312f52419b283c975d.jpg) + +![](images/ff39f1a10ef9c8e73f1f4d4f4c2117cfcb7446b2f1d6adc73733e8cb6bdb2b13.jpg) + +Figure 13: Shifted nodes detection performance in ER2/SF2. See App. C.2 for experimental details. iSCAN (light blue) consistently outperformed baselines in terms of F1 score, precision, and recall.
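The shifted-node detection metrics reported in these figures reduce to set comparisons between the estimated and ground-truth sets of shifted nodes. A small helper (the node labels in the example are hypothetical):

```python
def detection_scores(estimated, truth):
    """Precision, recall, and F1 for an estimated set of shifted nodes."""
    tp = len(estimated & truth)  # true positives: correctly flagged nodes
    precision = tp / len(estimated) if estimated else 0.0
    recall = tp / len(truth) if truth else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall > 0 else 0.0)
    return precision, recall, f1

# Hypothetical estimate {1, 4, 7} against ground truth {1, 4, 9}.
p, r, f1 = detection_scores({1, 4, 7}, {1, 4, 9})
print(p, r, f1)
```

The same helper applies unchanged to difference-DAG recovery by passing sets of estimated and true edges instead of nodes.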
+![](images/d9260910a765ebeaf2b76db44e9c3711f856e24c3e2498977cd1cce1f86ef5ad.jpg) + +![](images/bdc6d41b0f9362f199749aad0238777abb02ceb92d19ac99d8868efc243bf1cb.jpg) + +![](images/b72e346de4bd1aef3e19641b32a8fe677fc8cb21e8a87b52b187db1fcf135229.jpg) + +![](images/cf6f042925b1762c0eea62b133762d654a342d3d1dc36cc8bf80a009b015be27.jpg) + +Figure 14: Shifted nodes detection performance in ER4/SF4. See App. C.2 for experimental details. iSCAN (light blue) consistently outperformed baselines in terms of F1 score, precision, and recall. +![](images/368b1ceece95235c174f7212140dfae168e3ec08abf6e87268d866f6155ca5ae.jpg) + +![](images/6054045273355d4b028c261fa4991cd7be8dc8f35142272dc7b76373298998b5.jpg) + +![](images/3220000933c335de424137fbb762c330f4b24482a5d793b5e82a44822b34a080.jpg) +Figure 15: Shifted nodes detection performance in ER6/SF6. See App. C.2 for experimental details. iSCAN consistently outperformed baselines in terms of F1 score, precision, and recall. + +![](images/3a454d0f07405f9b2d97519e109b11abe5868b75d92ada1a91caaed3b8b2c018.jpg) +Figure 16: Difference DAG recovery performance in all different graphs. iSCAN-FOCI (green) consistently outperformed DCI (red) in terms of F1 score. + +# C.3 Performance of Alg. 3 using the elbow method + +In this section, we aim to understand the performance of our method when using the elbow approach discussed in Remark 6, random functions for shifted nodes, and different noise variances per variable within an environment. + +Data generation process. We first sampled a Directed Acyclic Graph (DAG) according to either the Erdős-Rényi (ER) model or the Scale-Free (SF) model for environment $\mathcal{E}_1$ . + +For environment $\mathcal{E}_2$ , we used the same DAG structure as in environment $\mathcal{E}_1$ , ensuring a direct comparison between the two environments.
To introduce artificial shifted edges, we randomly selected $0.2 \cdot d$ nodes from the non-root nodes, where $d$ represents the total number of nodes in the DAG. These selected nodes correspond to the shifted nodes, denoted as $S$ , with $|S| = 0.2d$ . For each shifted node $X_j \in S$ , we uniformly at random deleted three of its incoming edges in environment $\mathcal{E}_2$ . The parent nodes whose edges to $X_j$ were deleted are denoted as $D_j$ . + +The functional relationship between a shifted node $X_{j}$ and its parents in environment $\mathcal{E}_1$ was defined as follows: + +$$ +X_j = \sum_{i \in \mathrm{PA}_j,\, i \notin D_j} \sin(X_i^2) + \sum_{i \in D_j} c_{ij} \cdot f_{ij}(-2X_i^3 + 3X_i^2 + 4X_i) + N_j, +$$ + +where $c_{ij} \sim \mathrm{Uniform}([-5, -2] \cup [2, 5])$ , and $f_{ij}$ is a function from $\{\mathrm{sinc}(\cdot), \cos(\cdot)\}$ chosen uniformly at random. For environment $\mathcal{E}_2$ , where the adjacency matrix has undergone deletions, we defined the functional relationship between each node and its parents as follows: + +$$ +X_j = \sum_{i \in \mathrm{PA}_j} \sin(X_i^2) + N_j +$$ + +Experiment details. In each simulation, we generated $\{500, 1000\}$ data points, with the variances of the noises set uniformly at random in $[0.25, 0.5]$ . We tested three types of noise distributions, namely, the Normal, Laplace, and Gumbel distributions. We conducted 100 simulations for each combination of graph type, noise type, and number of nodes. The running time was recorded by executing the experiments on an Intel Xeon Gold 6248R Processor with 8 cores. For our method, we used the hyperparameter $\eta = 0.001$ . Different from the hard threshold of $t = 2$ used in previous experiments, we now used the elbow approach to determine the set of shifted nodes.
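One common way to realize the elbow rule is the maximum-distance-to-chord heuristic: sort the $\mathrm{stats}$ scores in decreasing order and take the point farthest from the straight line joining the first and last points as the elbow. The pure-NumPy sketch below is our own stand-in for illustration, not the exact implementation used in the experiments (which rely on the kneed package):

```python
import numpy as np

def elbow_cutoff(scores):
    """Declare as shifted the nodes strictly before the elbow of the
    sorted-score curve, where the elbow is the point farthest from the
    chord joining the curve's endpoints."""
    order = np.argsort(scores)[::-1]           # node indices by decreasing score
    y = np.asarray(scores, dtype=float)[order]
    x = np.arange(len(y), dtype=float)
    x0, y0, x1, y1 = x[0], y[0], x[-1], y[-1]
    # Unsigned distance from each curve point to the endpoint chord.
    dist = np.abs((y1 - y0) * x - (x1 - x0) * y + x1 * y0 - y1 * x0)
    dist /= np.hypot(y1 - y0, x1 - x0)
    elbow = int(np.argmax(dist))
    return set(order[:elbow].tolist())

# Two clearly elevated scores (nodes 1 and 3) followed by a flat tail.
print(elbow_cutoff([0.1, 5.0, 0.2, 4.0, 0.15, 0.12]))  # → {1, 3}
```

With this convention the elbow point itself is excluded, so only the clearly elevated scores are flagged; whether to include the elbow point is a design choice that a hard threshold like $t = 2$ sidesteps.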
To automatically select the elbow we made use of the Python package kneed, with hyperparameters `curve='convex'`, `direction='decreasing'`, `online=online`, and `interp_method='interp1d'`. + +Evaluation. In this experiment, we assessed the performance of our method in two aspects: detecting shifted nodes and recovering the difference DAG. For the evaluation of shifted node detection, we measured F1 score, recall, and precision. In the evaluation of difference DAG recovery, we compared the estimated difference DAG with the ground truth difference DAG using F1 score. + +In Figures 17 and 18 we present the performance when using the elbow approach discussed in Remark 6. In Figure 17, we note that iSCAN performs similarly for sample sizes of 500 and 1000. We also show the top-1, top-2, and top-k precision of iSCAN when choosing the first, first two, and first k variables of stats (see Algorithm 3) after sorting in decreasing order, respectively. We remark that the strong performance of iSCAN in top-1 or top-2 precision suggests that, in situations where it is difficult to choose a threshold for Algorithm 3, the practitioner can consider the first or first two variables of stats as the most likely shifted nodes. Finally, in Figure 18 we show that iSCAN outperforms DCI in recovering the underlying structural difference. + +![](images/d7125a646bad09604a393a55adac4ee162e8870a9c93973ba7ddafe47240df61.jpg) + +![](images/bf4f5c5ee579be18467c8e18c83f23eeda742399efa916711a5adf0eabdacba2.jpg) + +![](images/55f6bc3923b1c7a23874c8b842f87268d3ba42de081d3ed30c30c7624ea73d25.jpg) + +Figure 17: Shifted nodes detection performance in ERk and SFk for $k \in \{2,4,6\}$ . For each point in each subplot, we conducted 100 simulations as described in Section C.3. The points indicate the average values obtained from these simulations. The error bars depict the standard errors.
Our method iSCAN (green) consistently outperformed DCI (red) in terms of F1 score, precision, and recall. +![](images/7c7315f8b588c30d0ce5bb0bb1b480b4a895f0f095c57c0c0fc341bf554955a0.jpg) + +![](images/f36839b7b07d7e5c662102d0f96f85ddfc663ddc37996aac83631173d09f48e0.jpg) + +![](images/7dd78fd6ffc5c5c940c2c6301f74e79f24221b7434eb9051b6aa246e77de3e32.jpg) + +Figure 18: Difference DAG recovery performance in all different graphs. For each point in each subplot, we conducted 100 simulations as described in Section C.3. The points indicate the average values obtained from these simulations. The error bars depict the standard errors. Our method iSCAN with FOCI (green) consistently outperformed DCI (red) in terms of F1 score. +![](images/176de3b2c2e067bc6e892445795d9da5b550867cc48ee31f0b38aa611181a5e0.jpg) + +![](images/03a861377535700efb603cb2e6139f13052975bc1280b14a59dcce1685a3e824.jpg) + +![](images/28f2de2ca1f03e822980f2859e9f020534a5ef903dfc4be5de9e1cee5abe723a.jpg) + +# D Additional Discussion on Shifted Edges + +In Section 4, we focused on estimating structural changes across the environments (Definition 4). However, in some situations it might be of interest to determine whether the functional relationship between two variables has changed across the environments. The latter could have multiple interpretations; in this section, we elaborate on a particular type of functional change via partial derivatives. + +Definition 5 (functionally shifted edge). Given environments $\mathcal{E}_h = (X, f^h, \mathbb{P}_N^h)$ for $h \in [H]$ , an edge $(X_i \to X_j)$ is called a functionally shifted edge if there exist $h, h' \in [H]$ such that: + +$$ +\frac{\partial}{\partial x_i} f_j^h\left(\mathrm{PA}_j^h\right) \neq \frac{\partial}{\partial x_i} f_j^{h'}\left(\mathrm{PA}_j^{h'}\right).
+$$ + +Without further assumptions about the functional form of $f_{j}^{h}$ , certain ill-posed situations may arise under Definition 5. Let us consider the following example. + +Example 1. Let $\mathcal{E}_A$ and $\mathcal{E}_B$ be two environments, each consisting of three nodes. Let the structural equations for node $X_3$ be: $X_3^A = \exp (X_1^A +X_2^A) + N_3$ , and $X_{3}^{B} = \exp (2\cdot X_{1}^{B} + X_{2}^{B}) + N_{3}$ . In this scenario, one could consider that the causal relationship $X_{2}\rightarrow X_{3}$ has not changed. However, we note that $\frac{\partial f_3^A}{\partial x_2^A}\neq \frac{\partial f_3^B}{\partial x_2^B}$ ; thus, testing for changes in the partial derivative would yield a false discovery for the non-shifted edge $X_{2}\to X_{3}$ . + +Ill-posed situations such as the above example can be avoided by additional assumptions on the functional mechanisms. We next discuss a sufficient condition under which the partial derivative test for functional changes is well-defined. + +Assumption D (Additive Models). Let $S$ be the set of shifted nodes across all the $H$ environments. Then, for all $j \in S, h \in [H]$ : + +$$ +f_j^h(\mathrm{PA}_j^h) = a_j^h + \sum_{k \in \mathrm{PA}_j^h} f_{jk}^h(X_k), +$$ + +where $a_j^h$ is a constant and $f_{jk}^h$ is a nonlinear function lying in some function class $\mathcal{F}$ . + +Remark 7. Assumption $D$ amounts to modelling each variable as a generalized additive model [27]. It is widely used in nonparametrics and causal discovery [12, 43, 80]. Moreover, it not only provides a practical framework but also makes the definition of shifted edges (as per Definition 5) well-defined and reasonable. + +Remark 8. Note that Assumption $D$ makes assumptions only on the set of shifted nodes.
This is because the set of invariant nodes can be identified regardless of their type of structural equation, and it is also clear that these nodes cannot have any type of shift. + +Now consider a function class $\mathcal{F}$ which incorporates the use of basis functions to model the additive components $f_{jk}^{h}$ . Specifically, we express $f_{jk}^{h}(x_{k}) = \Psi_{jk}^{h}(x_{k})\beta_{jk}^{h}$ , where the feature mapping $\Psi_{jk}^{h}$ is a $1\times r$ matrix whose columns represent the basis functions and $\beta_{jk}^{h}$ is an $r$ -dimensional vector containing the corresponding coefficients. Moreover, we assume that the functions $f_{jk}^{1},\ldots ,f_{jk}^{H}$ share the same feature mapping $\Psi_{jk}^{1}(\cdot) = \dots = \Psi_{jk}^{H}(\cdot)$ but can have different coefficients $\beta_{jk}^{h}$ across the $H$ environments. The latter has been assumed in prior work, e.g., [44]. The approach of using a basis function approximation is widely adopted in nonparametric analysis, and it has been successfully employed in various domains such as graph-based methods [80] and the popular CAM framework [12]. Then, under Assumption D and Definition 5, we present the following proposition: + +Proposition 2. Under Assumption $D$ , an edge $(X_{i}\to X_{j})$ is a functionally shifted edge, as in Definition 5, if and only if the basis coefficients are different. That is, + +$$ +\frac{\partial f_j^h}{\partial x_i} \neq \frac{\partial f_j^{h'}}{\partial x_i} \Longleftrightarrow \beta_{ji}^h \neq \beta_{ji}^{h'}. +$$ + +Proof. We have, + +$$ +\frac{\partial f_j^h}{\partial x_i} = \frac{\mathrm{d} f_{ji}^h(x_i)}{\mathrm{d} x_i} = \frac{\mathrm{d}(\Psi_{ji}(x_i)\beta_{ji}^h)}{\mathrm{d} x_i} = \frac{\mathrm{d}\Psi_{ji}(x_i)}{\mathrm{d} x_i}\beta_{ji}^h.
+$$ + +Then, + +$$ +\frac{\partial f_j^h}{\partial x_i} - \frac{\partial f_j^{h'}}{\partial x_i} = \frac{\mathrm{d}\Psi_{ji}(x_i)}{\mathrm{d} x_i}\left(\beta_{ji}^h - \beta_{ji}^{h'}\right) \neq \mathbf{0} \Longleftrightarrow \beta_{ji}^h - \beta_{ji}^{h'} \neq \mathbf{0} +$$ + +The last $\Longleftrightarrow$ holds because, by the linear independence of the basis functions in $\Psi_{ji}$ , the null space of $\frac{\mathrm{d}\Psi_{ji}}{\mathrm{d}x_i}$ contains only the zero vector $\mathbf{0}$ . + +Note that the output of Algorithm 1 also estimates a topological order $\hat{\pi}$ . However, the exact parents of a node $X_{j}$ across the environments are not known, and they are possibly different. To estimate the coefficients without knowledge of the exact parents, we can consider the set $\widehat{\operatorname{Pre}(X_j)}$ , which consists of the nodes located before $X_{j}$ in the topological order $\hat{\pi}$ . By regressing $X_{j}$ on $\widehat{\operatorname{Pre}(X_j)}$ for each environment, we can obtain coefficient estimates which, in large samples, coincide with the coefficients obtained by regressing $X_{j}$ on its exact parents. + +Theorem 3. In large samples, let $\{\tilde{\beta}_{jk}^h\}_{k\in \mathrm{Pre}(X_j)}$ be the coefficients obtained by regressing $X_{j}$ on the feature mapping of $\operatorname {Pre}(X_j)$ , and let $\{\beta_{jk}^{h}\}_{k\in \mathrm{PA}_{j}}$ be the coefficients obtained by regressing $X_{j}$ on the feature mapping of $\mathrm{PA}_j$ . Then, $\tilde{\beta}_{jk}^{h} = \beta_{jk}^{h}$ if $k\in \mathrm{PA}_j^h$ , and $\tilde{\beta}_{jk}^{h} = 0$ if $k\in \operatorname {Pre}(X_j)\setminus \mathrm{PA}_j^h$ . + +Proof. Proof can be found in Appendix B.3.
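Theorem 3 can be illustrated numerically in a simple special case. The sketch below uses an identity (linear) feature map on a toy chain, a degenerate choice of basis made only so the claim is easy to check: in large samples, the regression of $X_3$ on all of its predecessors assigns a coefficient near zero to the non-parent $X_1$ and recovers the true mechanism coefficient on the parent $X_2$.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20000  # "large samples"

# Chain X1 -> X2 -> X3 with linear mechanisms (identity feature map).
X1 = rng.normal(size=n)
X2 = 2.0 * X1 + rng.normal(size=n)
X3 = -1.5 * X2 + rng.normal(size=n)  # PA_3 = {X2}; X1 is a non-parent predecessor

# Regress X3 on all predecessors Pre(X3) = {X1, X2}.
Phi = np.column_stack([X1, X2])
beta, *_ = np.linalg.lstsq(Phi, X3, rcond=None)
print(beta)  # coefficient on X1 near 0; coefficient on X2 near -1.5
```

Even though $X_1$ and $X_3$ are strongly correlated marginally, regressing on the full predecessor set zeroes out the non-parent's coefficient, which is exactly why $\widehat{\operatorname{Pre}(X_j)}$ can be used in place of the unknown parent sets.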
+ +Motivated by Theorem 3, and given an estimated $\{\tilde{\beta}_{jk}^h\}_{k\in \widehat{\mathrm{Pre}} (X_j)}$ , one could conduct a hypothesis test as follows: + +$$ +H_0: \tilde{\beta}_{jk}^1 = \dots = \tilde{\beta}_{jk}^H \tag{13} +$$ + +If the null hypothesis $H_0$ is rejected, it indicates that there is evidence of a functionally shifted edge between nodes $X_{k}$ and $X_{j}$ across the environments. In this paper we leave the hypothesis test unspecified to allow for any procedure that can test eq.(13). + +# Algorithm 5 Functionally shifted edges detection + +Input: Sample data $X^1, \ldots, X^H$ , shifted nodes set $\widehat{S}$ , topological order $\hat{\pi}$ , significance level $\alpha$ . + +Output: Set of functionally shifted edges $\widehat{E}$ + +1: for $j\in \widehat{S}$ do +2: Estimate $\tilde{\beta}_{jk}^{h}$ for all $k\in \widehat{\mathrm{Pre}} (X_j)$ and $h\in [H]$ +3: for $k \in \widehat{\mathrm{Pre}}(X_j)$ do +4: Conduct hypothesis test $H_0$ (equation 13) at significance level $\alpha$ . +5: If $H_0$ is rejected, add edge $(X_k \to X_j)$ to $\widehat{E}$ .
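A minimal sketch of Algorithm 5 in Python, with a plain coefficient-difference threshold standing in for the (deliberately unspecified) hypothesis test at level $\alpha$. The function name, the single-basis feature map, and the tolerance are our illustrative choices:

```python
import numpy as np

def shifted_edges(X_envs, shifted_nodes, order, features, tol=0.1):
    """Flag edges whose fitted basis coefficients differ across environments.

    X_envs: list of (m, d) data matrices, one per environment.
    shifted_nodes: estimated set S-hat of shifted nodes.
    order: estimated topological order pi-hat, as a list of node indices.
    features: maps one data column to its basis feature(s); here a single
              basis function per predecessor, for simplicity.
    tol: coefficient-difference threshold standing in for a proper test.
    """
    edges = set()
    for j in shifted_nodes:
        pre = order[: order.index(j)]  # predecessors Pre(X_j) under pi-hat
        betas = []
        for X in X_envs:
            Phi = np.column_stack([features(X[:, k]) for k in pre])
            beta, *_ = np.linalg.lstsq(Phi, X[:, j], rcond=None)
            betas.append(beta)
        for idx, k in enumerate(pre):
            if any(abs(b[idx] - betas[0][idx]) > tol for b in betas[1:]):
                edges.add((k, j))
    return edges

# Two environments where the X_0 -> X_1 mechanism scales by a different
# constant (1.0 vs. 2.0); node 1 plays the role of the shifted node.
rng = np.random.default_rng(2)
m = 5000
Z = rng.normal(size=m)
E1 = np.column_stack([Z, 1.0 * Z + 0.1 * rng.normal(size=m)])
Z = rng.normal(size=m)
E2 = np.column_stack([Z, 2.0 * Z + 0.1 * rng.normal(size=m)])
edges = shifted_edges([E1, E2], {1}, [0, 1], features=lambda col: col)
print(edges)  # the edge (0, 1) is flagged as functionally shifted
```

A production version would replace the threshold with an actual test of $H_0$ in eq.(13), e.g. a Wald-type comparison of the per-environment coefficient vectors.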
\ No newline at end of file diff --git a/iscanidentifyingcausalmechanismshiftsamongnonlinearadditivenoisemodels/images.zip b/iscanidentifyingcausalmechanismshiftsamongnonlinearadditivenoisemodels/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..4cfacf8e77d223a1cedcbbd30bcfb6fb9d24d404 --- /dev/null +++ b/iscanidentifyingcausalmechanismshiftsamongnonlinearadditivenoisemodels/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:bdf581681dd78a590c739e041856beffe369f9ea3d270111be4c8434dd3e744d +size 2656618 diff --git a/iscanidentifyingcausalmechanismshiftsamongnonlinearadditivenoisemodels/layout.json b/iscanidentifyingcausalmechanismshiftsamongnonlinearadditivenoisemodels/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..f1ef6f8f47b0178638f6a74bbf406cfc740fac0d --- /dev/null +++ b/iscanidentifyingcausalmechanismshiftsamongnonlinearadditivenoisemodels/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4ce6ba9ce747a79db1809ce887ddc63873b9bc9e36c16075f6d0016216390288 +size 1428199 diff --git a/kmedianclusteringviametricembeddingtowardsbetterinitializationwithdifferentialprivacy/2dd01f7a-eb35-4d5f-8538-975c25eab59c_content_list.json b/kmedianclusteringviametricembeddingtowardsbetterinitializationwithdifferentialprivacy/2dd01f7a-eb35-4d5f-8538-975c25eab59c_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..11f60e4a273e5d03af4ee571349e7c0fc80f9831 --- /dev/null +++ b/kmedianclusteringviametricembeddingtowardsbetterinitializationwithdifferentialprivacy/2dd01f7a-eb35-4d5f-8538-975c25eab59c_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ec9ddf1bf445cf618a6620a533b2f22a64cadcd2aa9226ca8bcf73ea2d0e6370 +size 132107 diff --git a/kmedianclusteringviametricembeddingtowardsbetterinitializationwithdifferentialprivacy/2dd01f7a-eb35-4d5f-8538-975c25eab59c_model.json 
b/kmedianclusteringviametricembeddingtowardsbetterinitializationwithdifferentialprivacy/2dd01f7a-eb35-4d5f-8538-975c25eab59c_model.json new file mode 100644 index 0000000000000000000000000000000000000000..a103691129f39f525dfb201056dcab6b839a7e8c --- /dev/null +++ b/kmedianclusteringviametricembeddingtowardsbetterinitializationwithdifferentialprivacy/2dd01f7a-eb35-4d5f-8538-975c25eab59c_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:584c4f853dd7a41ecc3ded17df48c2cf9160100ca5afcbdc031ccfb81a5bbae0 +size 158892 diff --git a/kmedianclusteringviametricembeddingtowardsbetterinitializationwithdifferentialprivacy/2dd01f7a-eb35-4d5f-8538-975c25eab59c_origin.pdf b/kmedianclusteringviametricembeddingtowardsbetterinitializationwithdifferentialprivacy/2dd01f7a-eb35-4d5f-8538-975c25eab59c_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..4e24c3f4efc9646db16a960125a9e5a5f1537df7 --- /dev/null +++ b/kmedianclusteringviametricembeddingtowardsbetterinitializationwithdifferentialprivacy/2dd01f7a-eb35-4d5f-8538-975c25eab59c_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:782f2ec970d3e1b81475ef67b947913c5b622c7c642c94b8968c06c1ae24d3ff +size 720966 diff --git a/kmedianclusteringviametricembeddingtowardsbetterinitializationwithdifferentialprivacy/full.md b/kmedianclusteringviametricembeddingtowardsbetterinitializationwithdifferentialprivacy/full.md new file mode 100644 index 0000000000000000000000000000000000000000..a662344575116d7f3cf5630b8890703dff93f4da --- /dev/null +++ b/kmedianclusteringviametricembeddingtowardsbetterinitializationwithdifferentialprivacy/full.md @@ -0,0 +1,669 @@ +# $k$ -Median Clustering via Metric Embedding: Towards Better Initialization with Differential Privacy + +Chenglin Fan, Ping Li, Xiaoyun Li + +Cognitive Computing Lab + +Baidu Research + +10900 NE 8th St. 
Bellevue, WA 98004, USA + +{fanchenglin, pingli98, lixiaoyun996}@gmail.com + +# Abstract + +In clustering, the choice of initial centers is crucial for the convergence speed of the algorithms. We propose a new initialization scheme for the $k$ -median problem in the general metric space (e.g., discrete space induced by graphs), based on the construction of a metric embedding tree structure of the data. We propose a novel and efficient search algorithm that finds initial centers which can be used subsequently by the local search algorithm. The so-called HST initialization method can produce initial centers achieving lower error than those from the popular $k$ -median++ method, also with higher efficiency when $k$ is not too small. Our HST initialization is then extended to the setting of differential privacy (DP) to generate private initial centers. We show that DP local search starting from our private HST initialization improves prior results on the approximation error, and approaches the lower bound within a small factor. Experiments demonstrate the effectiveness of our proposed methods. + +# 1 Introduction + +Clustering is an important classic problem in unsupervised learning that has been widely studied in statistics, data mining, machine learning, network analysis, etc. (Punj and Stewart, 1983; Dhillon and Modha, 2001; Banerjee et al., 2005; Berkhin, 2006; Abbasi and Younis, 2007). The objective of clustering is to divide a set of data points into clusters, such that items within the same cluster exhibit similarities, while those in different clusters distinctly differ. This is concretely measured by the sum of distances (or squared distances) between each point and its nearest cluster center. One conventional notion to evaluate a clustering algorithm is: with high probability, $cost(C, D) \leq \gamma OPT_k(D) + \xi$ , where $C$ is the center set output by the algorithm and $cost(C, D)$ is a cost function defined for $C$ on dataset $D$ .
$OPT_k(D)$ is the cost of the optimal clustering solution on $D$ . When everything is clear from context, we will use $OPT$ for short. Here, $\gamma$ is called the multiplicative error and $\xi$ the additive error. Alternatively, we may also use the notion of expected cost. + +Two popularly studied clustering problems are 1) the $k$ -median problem, and 2) the $k$ -means problem. The origin of $k$ -median dates back to the 1970s (e.g., Kaufman et al. (1977)), where one tries to find the best location of facilities that minimizes the cost measured by the distance between clients and facilities. Formally, given a set of points $D$ and a distance measure, the goal is to find $k$ center points minimizing the sum of absolute distances of each sample point to its nearest center. In $k$ -means, the objective is to minimize the sum of squared distances instead. There are two general frameworks for clustering. One heuristic is Lloyd's algorithm (Lloyd, 1982), which is built upon an iterative distortion minimization approach. In most cases, this method can only be applied to numerical data, typically in the (continuous) Euclidean space. Clustering in general metric spaces (discrete spaces) is also important and useful when dealing with, for example, graph data, where Lloyd's method is no longer applicable. A more generally applicable approach, the local search method (Kanungo et al., 2002; Arya et al., 2004), has also been widely studied. It iteratively finds the optimal swap between the center set and non-center data points to keep lowering the cost. Local search can achieve a constant approximation (i.e., $\gamma = O(1)$ ) to the optimal solution (Arya et al., 2004). For general metric spaces, the state-of-the-art approximation ratio is 2.675 for $k$ -median (Byrka et al., 2015) and 6.357 for $k$ -means (Ahmadian et al., 2017). + +Initialization of cluster centers. It is well-known that the performance of clustering can be highly sensitive to initialization.
If clustering starts with good initial centers with small approximation error, the algorithm may use fewer iterations to find a better solution. The $k$ -median++ algorithm (Arthur and Vassilvitskii, 2007) iteratively selects $k$ data points as initial centers, favoring distant points in a probabilistic way, such that the initial centers tend to be well spread over the data points (i.e., over different clusters). The produced initial centers are proved to have $O(\log k)$ multiplicative error. Follow-up works further improved its efficiency and scalability, e.g., Bahmani et al. (2012); Bachem et al. (2016); Lattanzi and Sohler (2019); Choo et al. (2020); Cohen-Addad et al. (2021); Grunau et al. (2023); Fan et al. (2023). In this work, we propose a new initialization framework, called HST initialization, which is built upon a novel search algorithm on metric embedding trees constructed from the data. Our method achieves improved approximation error compared with $k$ -median++. Moreover, importantly, our initialization scheme can be conveniently combined with the notion of differential privacy (DP) to protect the data privacy. + +Clustering with Differential Privacy. The concept of differential privacy (Dwork, 2006; McSherry and Talwar, 2007) has been popular to rigorously define and resolve the problem of keeping useful information for machine learning models, while protecting privacy for each individual. DP has been adopted to a variety of algorithms and tasks, such as regression, classification, principle component analysis, graph distance release, matrix completion, optimization, and deep learning (Chaudhuri and Monteleoni, 2008; Chaudhuri et al., 2011; Abadi et al., 2016; Ge et al., 2018; Wei et al., 2020; Dong et al., 2022; Fan and Li, 2022; Fan et al., 2022; Fang et al., 2023; Li and Li, 2023a,b). Private $k$ -means clustering has also been widely studied, e.g., Feldman et al. (2009); Nock et al. (2016); Feldman et al. 
(2017), mostly in the continuous Euclidean space. Balcan et al. (2017) considered privately identifying a good candidate set of centers before applying private local search, which yields $O(\log^3 n)$ multiplicative error and $O((k^2 + d)\log^5 n)$ additive error. Later on, the private Euclidean $k$ -means error was further improved by Stemmer and Kaplan (2018), with more advanced candidate set selection. Huang and Liu (2018) gave an optimal algorithm in terms of minimizing Wasserstein distance under some data separability condition. + +For private $k$ -median clustering, Feldman et al. (2009); Ghazi et al. (2020) considered the problem in high dimensional Euclidean space. However, it is rather difficult to extend their analysis to more general metrics in discrete spaces (e.g., on graphs). The strategy of Balcan et al. (2017) to form a candidate center set could as well be adopted for $k$ -median, which leads to $O(\log^{3/2} n)$ multiplicative error and $O((k^2 + d) \log^3 n)$ additive error in the Euclidean space, where $n$ is the sample size. In discrete space, Gupta et al. (2010) proposed a private method for the classical local search heuristic, which applies to both $k$ -median and $k$ -means. To cast privacy on each swapping step, the authors applied the exponential mechanism of McSherry and Talwar (2007). Their method produces an $\epsilon$ -differentially private solution with cost $6OPT + O(\triangle k^2 \log^2 n / \epsilon)$ , where $\triangle$ is the diameter of the point set. In this work, we will show that our proposed HST initialization can improve the DP local search for $k$ -median of Gupta et al. (2010) in terms of both approximation error and efficiency. Stemmer and Kaplan (2018); Jones et al. (2021) proposed $(\epsilon, \delta)$ -differentially private solutions, also with constant multiplicative error but smaller additive error.
+ +The main contributions of this work include the following: + +- We introduce the Hierarchically Well-Separated Tree (HST) as an initialization tool for the $k$ -median clustering problem. We design an efficient sampling strategy to select the initial center set from the tree, with an approximation factor $O(\log \min \{k,\triangle \})$ in the non-private setting, which is $O(\log \min \{k,d\})$ when $\log \triangle = O(\log d)$ . This improves the $O(\log k)$ error of $k$ -median++. Moreover, the complexity of our HST-based method can be smaller than that of $k$ -median++ when the number of clusters $k$ is not too small ( $k\geq \log n$ ), which is a common scenario in practical applications. + +- We propose a differentially private version of HST initialization under the setting of Gupta et al. (2010) in discrete metric space. The so-called DP-HST algorithm finds initial centers with $O(\log n)$ multiplicative error and $O(\epsilon^{-1}\triangle k^2\log^2 n)$ additive error. Moreover, running DP local search starting from this initialization gives $O(1)$ multiplicative error and $O(\epsilon^{-1}\triangle k^2 (\log \log n)\log n)$ additive error, which improves previous results towards the well-known lower bound $O(\epsilon^{-1}\triangle k\log (n / k))$ on the additive error of DP $k$ -median (Gupta et al., 2010) within a small $O(k\log \log n)$ factor. This is the first clustering initialization method with an $\epsilon$ -differential privacy guarantee and improved error rate in general metric space. +- We conduct experiments on simulated and real-world datasets to demonstrate the effectiveness of our methods. In both non-private and private settings, our proposed HST-based approach achieves smaller cost at initialization than $k$ -median++, which may also lead to improvements in the final clustering quality. + +# 2 Background and Setup + +The definition of differential privacy (DP) is as follows. + +Definition 2.1 (Differential Privacy (DP) (Dwork, 2006)).
If for any two adjacent datasets $D$ and $D'$ with symmetric difference of size one, and any $O \subseteq \operatorname{Range}(\mathbb{A})$ , an algorithm $\mathbb{A}$ satisfies + +$$ +\Pr[\mathbb{A}(D) \in O] \leq e^{\epsilon} \Pr[\mathbb{A}(D') \in O], +$$ + +then algorithm $\mathbb{A}$ is said to be $\epsilon$ -differentially private ( $\epsilon$ -DP). + +Intuitively, DP requires that after removing any data point from $D$ (e.g., a node in a graph), the output on $D'$ should not be too different from that on the original dataset $D$ . For a function $f$ , the Laplace mechanism adds $\mathrm{Laplace}(\eta(f)/\epsilon)$ noise to the output, where $\eta(f) = \sup_{|D - D'| = 1} |f(D) - f(D')|$ is the sensitivity of $f$ ; this mechanism is known to achieve $\epsilon$ -DP. The exponential mechanism is also a tool for many DP algorithms with discrete outputs. Let $O$ be the output domain. The utility function $q: D \times O \to \mathbb{R}$ is what we aim to maximize. The exponential mechanism outputs an element $o \in O$ with probability $P[\mathbb{A}(D) = o] \propto \exp(\frac{\epsilon q(D, o)}{2\eta(q)})$ . Both mechanisms will be used in our paper. + +# 2.1 $k$ -Median Clustering and Local Search + +In this paper, we follow the classic problem setting in the metric clustering literature, e.g., Arya et al. (2004); Gupta et al. (2010). Specifically, the definitions of the metric $k$ -median clustering problem (DP and non-DP) are stated as follows. + +Definition 2.2 ( $k$ -median). Given a universe point set $U$ and a metric $\rho: U \times U \to \mathbb{R}$ , the goal of $k$ -median is to pick $F \subseteq U$ with $|F| = k$ to minimize + +$$ +k\text{-median:} \quad \operatorname{cost}_k(F, U) = \sum_{v \in U} \min_{f \in F} \rho(v, f). \tag{1} +$$ + +Let $D \subseteq U$ be a set of "demand points".
+The goal of DP $k$-median is to minimize
+
+$$
+\text{DP } k\text{-median:} \quad \operatorname{cost}_k(F, D) = \sum_{v \in D} \min_{f \in F} \rho(v, f), \tag{2}
+$$
+
+and the output $F$ is required to be $\epsilon$-differentially private with respect to $D$. We may drop "$F$" and use "$\operatorname{cost}_k(U)$" or "$\operatorname{cost}_k(D)$" if there is no risk of ambiguity.
+
+Note that in Definition 2.2, our aim is to protect the privacy of a subset $D \subset U$. To better understand the motivation and application scenario, we provide a real-world example below.
+
+Example 2.3. Consider $U$ to be the universe of all users in a social network (e.g., Facebook, LinkedIn, etc.). Each user (account) has some public information (e.g., name, gender, interests, etc.), but also has some private personal data that can only be seen by the data server. Let $D$ be a set of users grouped by some feature that might be set as private. Suppose a third party plans to collaborate with the most influential users in $D$ for, e.g., commercial purposes, thus requesting the cluster centers of $D$. In this case, we need a differentially private algorithm to safely release the centers, while protecting the individuals in $D$ from being identified (since the membership of $D$ is private).
+
+Algorithm 1: Local search for $k$-median clustering (Arya et al., 2004)
+Input: Data points $U$, parameter $k$, constant $\alpha$
+Initialization: Randomly select $k$ points from $U$ as initial center set $F$
+while $\exists x \in F, y \in U$ s.t. $\operatorname{cost}(F - \{x\} + \{y\}) \leq (1 - \alpha/k)\operatorname{cost}(F)$ do
+  Select $(x, y) \in F \times (U \setminus F)$ with $\arg\min_{x,y}\{\operatorname{cost}(F - \{x\} + \{y\})\}$
+  Swap operation: $F \gets F - \{x\} + \{y\}$
+Output: Center set $F$
+
+The (non-private) local search procedure for $k$-median proposed by Arya et al. (2004) is summarized in Algorithm 1. First, we randomly pick $k$ points in $U$ as the initial centers.
+In each iteration, we search over all $x \in F$ and $y \in U$, and perform the swap $F \gets F - \{x\} + \{y\}$ that improves the cost the most, provided the new cost is at most $(1 - \alpha / k)$ times the current cost, for some constant $\alpha > 0$. We repeat the procedure until no such swap exists. Arya et al. (2004) showed that the output center set $F$ achieves a 5-approximation to the optimal solution, i.e., $\operatorname{cost}(F) \leq 5\,OPT$.
+
+# 2.2 $k$-median++ Initialization
+
+Although local search is able to find a solution with constant error, it takes $O(n^{2})$ time per iteration (Resende and Werneck, 2007) over an expected $O(k\log n)$ steps (which gives total complexity $O(kn^{2}\log n)$) when started from a random center set, which would be slow for large datasets. Indeed, such a meticulous algorithm is not needed to reduce the cost at the beginning, i.e., when the cost is still large. To accelerate the process, efficient initialization methods find a "roughly" good center set as the starting point for local search. In this paper, we compare our new initialization scheme mainly with a popular (and perhaps the most well-known) initialization method, $k$-median++ (Arthur and Vassilvitskii, 2007), as presented in Algorithm 2. The output centers $C$ of $k$-median++ achieve $O(\log k)$ approximation error with time complexity $O(nk)$. Starting from this initialization, we only need to run $O(k\log \log k)$ steps of the computationally heavy local search to reach a constant-error solution. Thus, initialization may greatly improve the clustering efficiency.
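For concreteness, the distance-proportional sampling used by $k$-median++ (Algorithm 2) can be sketched in Python as follows. This is a minimal illustrative sketch, not the authors' implementation; the function and variable names are our own.

```python
import random

def kmedianpp_init(points, k, rho):
    """Sketch of k-median++ initialization.

    points: list of (hashable) data points; rho(u, v) -> float is the metric.
    Each new center is sampled with probability proportional to its
    distance rho(u, F) to the current center set F.
    """
    centers = [random.choice(points)]          # first center: uniform at random
    for _ in range(k - 1):
        # rho(u, F): distance of each point to its nearest chosen center
        dists = [min(rho(u, c) for c in centers) for u in points]
        # sample u with probability rho(u, F) / sum_{u'} rho(u', F);
        # already-chosen centers have distance 0 and are never re-picked
        centers.append(random.choices(points, weights=dists, k=1)[0])
    return centers
```

On well-separated data, the distance-weighted sampling makes it likely that each cluster contributes one initial center, which is exactly the intuition behind the $O(\log k)$ guarantee quoted above.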
+
+Algorithm 2: $k$-median++ initialization (Arthur and Vassilvitskii, 2007)
+Input: Data points $U$, number of centers $k$
+Randomly pick a point $c_{1} \in U$ and set $F = \{c_1\}$
+for $i = 2,\dots,k$ do
+  Select $c_{i} = u \in U$ with probability $\frac{\rho(u,F)}{\sum_{u^{\prime}\in U}\rho(u^{\prime},F)}$
+  $F = F \cup \{c_i\}$
+Output: $k$-median++ initial center set $F$
+
+# 3 Initialization via Hierarchical Well-Separated Tree (HST)
+
+In this section, we propose our new initialization scheme for $k$-median clustering, and provide our analysis in the non-private case, solving (1). The idea is based on metric embedding theory. We will start with an introduction to the main tool used in our approach.
+
+# 3.1 Hierarchically Well-Separated Tree (HST)
+
+In this paper, for an $L$-level tree, we count levels in descending order down the tree. We use $h_v$ to denote the level of $v$, and let $n_i$ be the number of nodes at level $i$. The Hierarchically Well-Separated Tree (HST) is based on padded decompositions of a general metric space in a hierarchical manner (Fakcharoenphol et al., 2004). Let $(U,\rho)$ be a metric space with $|U| = n$, which we will refer to throughout without further clarification. A $\beta$-padded decomposition of $U$
+
+![](images/1087cef9e52419bcedb4ea64d12df934c9ce7c59e2d433eee2a11367f0999773.jpg)
+Figure 1: An example of a 3-level padded decomposition and the corresponding 2-HST. Left: The thickness of the ball represents the level. The colors correspond to different levels in the HST in the right panel. "$\triangle$"s are the center nodes of partitions (balls), and "$\times$"s are the non-center data points. Right: The 2-HST generated from the padded decomposition. Bold indices represent the centers.
+
+![](images/61d4e39ea494bc74d44a7e253933ea0e62f2b51e105f25697192e4557da68fd1.jpg)
+
+is a probabilistic partition of $U$ such that the diameter of each cluster $U_{i} \subseteq U$ is at most $\beta$, i.e., $\rho(u, v) \leq \beta, \forall u, v \in U_{i}, i = 1, \dots, k$. The formal definition of an HST is given below.
+
+Definition 3.1. Assume $\min_{u,v\in U}\rho (u,v) = 1$ and denote the diameter $\triangle = \max_{u,v\in U}\rho (u,v)$. An $\alpha$-Hierarchically Well-Separated Tree ($\alpha$-HST) with depth $L$ is an edge-weighted rooted tree $T$, such that an edge between any pair of two nodes of level $i - 1$ and level $i$ has length at most $\triangle/\alpha^{L - i}$.
+
+Our analysis considers $\alpha = 2$ (i.e., a 2-HST) for conciseness, since $\alpha$ only affects the constants in our theoretical analysis. Figure 1 shows an example 2-HST (right panel) with $L = 3$ levels, along with its underlying padded decompositions (left panel). Using Algorithm 3, a 2-HST can be built as follows: we first find a padded decomposition $P_{L} = \{P_{L,1},\dots,P_{L,n_{L}}\}$ of $U$ with parameter $\beta = \triangle /2$. The center of each partition in $P_{L,j}$ serves as a root node in level $L$. Then, we re-do a padded decomposition for each partition $P_{L,j}$, to find sub-partitions with diameter $\beta = \triangle /4$, and set the corresponding centers as the nodes in level $L - 1$, and so on. Each partition at level $i$ is obtained with $\beta = \triangle /2^{L - i}$. This process proceeds until a node has a single point (leaf), or a pre-specified tree depth is reached. It is worth mentioning that Blelloch et al. (2017) proposed an efficient HST construction taking $O(m\log n)$ time, where $n$ and $m$ are the number of nodes and edges in a graph, respectively. Therefore, the construction of an HST can be very efficient in practice.
+
+The first step of our method is to embed the data points into an HST (see Algorithm 4).
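The recursive construction just described (partition with radius $\triangle/2$, then recurse with halved radius) can be sketched in Python. This is a simplified illustration of the idea rather than the exact procedure of Algorithm 3; points are tuples, `rho` is the metric, and all names are our own. It assumes `level` is large enough (e.g., $L = \log \triangle$ with minimum pairwise distance 1) so that every cluster shrinks to a single point before the levels run out.

```python
import random

def build_2hst(points, rho, diameter, level):
    """Recursively build a 2-HST node as a (center, children) pair.

    The partition radius halves at each level, mirroring the padded
    decompositions with beta = diameter / 2^(L - i) described in the text.
    """
    if len(points) == 1 or level == 0:
        return (points[0], [])                 # leaf node
    r = diameter / 2
    order = points[:]
    random.shuffle(order)                      # random candidate-center sequence
    clusters, assigned = [], set()
    for v in order:
        if v in assigned:
            continue
        # ball of radius r around v, restricted to still-unassigned points
        ball = [u for u in order if u not in assigned and rho(u, v) <= r]
        assigned.update(ball)
        clusters.append(ball)
    children = [build_2hst(ball, rho, r, level - 1) for ball in clusters]
    return (order[0], children)
```

Because the balls at each level form a partition of the current point set, every data point ends up at exactly one leaf of the resulting tree.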
+Next, we describe our new strategy to search for the initial centers on the tree (w.r.t. the tree metric). Before moving on, it is worth mentioning that there are polynomial-time algorithms for computing an exact $k$-median solution in the tree metric (Tamir, 1996; Shah, 2003). However, these dynamic programming algorithms have high complexity (e.g., $O(kn^2)$), making them unsuitable for the purpose of fast initialization. Moreover, it is unknown how to apply them effectively in the private case. The three key merits of our new algorithm are: (1) It is more efficient than $k$-median++ when $k$ is not too small, which is a very common scenario in practice; (2) It achieves $O(1)$ approximation error in the tree metric; (3) It can be easily extended to incorporate differential privacy (DP).
+
+Algorithm 3: Build 2-HST$(U, L)$
+Input: Data points $U$ with diameter $\triangle$, level $L$
+Randomly pick a point in $U$ as the root node of $T$
+Let $r = \triangle / 2$
+Apply a permutation $\pi$ on $U$ // so points will be chosen in a random sequence
+for each $v \in U$ do
+  Set $C_v = \{v\}$
+  for each $u \in U$ do
+    Add $u$ to $C_v$ if $d(v, u) \leq r$ and $u \notin \bigcup_{v' \neq v} C_{v'}$
+Set the non-empty clusters $C_v$ as the children nodes of $T$
+for each non-empty cluster $C_v$ do
+  Run 2-HST$(C_v, L - 1)$ to extend the tree $T$; stop after $L$ levels or upon reaching a leaf node
+Output: 2-HST $T$
+
+# 3.2 HST Initialization Algorithm
+
+Let $L = \log \triangle$ and suppose $T$ is a level-$L$ 2-HST in $(U, \rho)$, where we assume $L$ is an integer. For a node $v$ at level $i$, we use $T(v)$ to denote the subtree rooted at $v$. Let $N_v = |T(v)|$ be the number of data points in $T(v)$. The search strategy for the initial centers, NDP-HST initialization ("NDP" stands for "Non-Differentially Private"), is presented in Algorithm 4 with two phases.
+
+Subtree search. The first step is to identify the subtrees that contain the $k$ centers.
+To begin with, the $k$ nodes of $T$ with the largest $\operatorname{score}(v) = N_v \cdot 2^{h_v}$ are picked as the initial set $C_1$. This is intuitive, since to get a good clustering, we typically want the ball surrounding each center to include more data points. Next, we do a screening over $C_1$: if there is any ancestor-descendant pair of nodes, we remove the ancestor from $C_1$. If the current size of $C_1$ is smaller than $k$, we repeat the process until $k$ centers are chosen (we do not re-select nodes in $C_1$ or their ancestors). This way, $C_1$ contains the root nodes of $k$ disjoint subtrees.
+
+Algorithm 4: NDP-HST initialization
+Input: $U, \triangle, k$
+Initialization: $L = \log \triangle$, $C_0 = \emptyset$, $C_1 = \emptyset$
+Call Algorithm 3 to build a level-$L$ 2-HST $T$ using $U$
+for each node $v$ in $T$ do
+  $N_{v} \gets |U \cap T(v)|$; $\operatorname{score}(v) \leftarrow N_v \cdot 2^{h_v}$
+while $|C_1| < k$ do
+  Add the top $(k - |C_1|)$ nodes with highest score to $C_1$
+  for each $v \in C_1$ do
+    $C_1 = C_1 \setminus \{v\}$ if $\exists v^{\prime} \in C_{1}$ such that $v^{\prime}$ is a descendant of $v$
+$C_0 = $ FIND-LEAF$(T, C_1)$
+Output: Initial center set $C_0 \subseteq U$
+
+Leaf search. After we find $C_1$, the set of roots of the $k$ subtrees, the next step is to find the center within each subtree using Algorithm 5 ("FIND-LEAF"). We employ a greedy search strategy, finding the child node with the largest score level by level, until a leaf is found. This approach is intuitive since the diameter of the partition ball decays exponentially with the level. Therefore, we are in a sense focusing more and more on the region with higher density (i.e., with more data points).
+
+The complexity of our search algorithm is given as follows. All proofs are placed in Appendix B.
+
+Proposition 3.2 (Complexity). Algorithm 4 takes $O(dn\log n)$ time in the Euclidean space.
+
+Remark 3.3 (Comparison with $k$-median++). The complexity of $k$-median++ is $O(dnk)$ in the Euclidean space (Arthur and Vassilvitskii, 2007).
+Our algorithm would be faster when $k > \log n$, which is a common scenario. A similar comparison also holds for general metrics.
+
+# 3.3 Approximation Error of HST Initialization
+
+We now provide the error analysis of our algorithm. First, we show that the initial center set produced by NDP-HST is already a good approximation to the optimal $k$-median solution. Let $\rho^T(x,y) = d_T(x,y)$ denote the "2-HST metric" between $x$ and $y$ in the 2-HST $T$, where $d_T(x,y)$ is the tree distance between nodes $x$ and $y$ in $T$. By Definition 3.1 and since $\triangle = 2^L$, in the analysis we assume
+
+Algorithm 5: FIND-LEAF$(T, C_1)$
+Input: $T, C_1$
+Initialization: $C_0 = \emptyset$
+for each node $v$ in $C_1$ do
+  while $v$ is not a leaf node do
+    $v \gets \arg\max_{w \in ch(v)} N_{w}$, where $ch(v)$ denotes the children nodes of $v$
+  Add $v$ to $C_0$
+Output: Initial center set $C_0 \subseteq U$
+
+equivalently that the edge weight at the $i$-th level is $2^{i - 1}$. The crucial step of our analysis is to examine the approximation error in terms of the 2-HST metric, after which the error can be adapted to general metrics by the following well-known result.
+
+Lemma 3.4 (Bartal (1996)). In a metric space $(U,\rho)$ with $|U| = n$ and diameter $\triangle$, it holds that $\forall x,y\in U$, $E[\rho^T (x,y)] = O(\min \{\log n,\log \triangle \})\rho (x,y)$. In the Euclidean space $\mathbb{R}^d$, $E[\rho^T (x,y)] = O(d)\rho (x,y)$, $\forall x,y\in U$.
+
+Recall $C_0, C_1$ from Algorithm 4.
+We define
+
+$$
+\operatorname{cost}_k^T(U) = \sum_{y \in U} \min_{x \in C_0} \rho^T(x, y), \tag{3}
+$$
+
+$$
+\operatorname{cost}_k^{T'}(U, C_1) = \min_{\substack{|F \cap T(v)| = 1, \\ \forall v \in C_1}} \sum_{y \in U} \min_{x \in F} \rho^T(x, y), \tag{4}
+$$
+
+$$
+OPT_k^T(U) = \min_{F \subset U, |F| = k} \sum_{y \in U} \min_{x \in F} \rho^T(x, y) \equiv \min_{C_1'} \operatorname{cost}_k^{T'}(U, C_1'). \tag{5}
+$$
+
+For simplicity, we will use $\operatorname{cost}_k^{T'}(U)$ to denote $\operatorname{cost}_k^{T'}(U, C_1)$. Here, $OPT_k^T$ in (5) is the cost of the globally optimal solution under the 2-HST metric. The last equivalence in (5) holds because the optimal centers can always be located in $k$ disjoint subtrees, as each leaf only contains one point. (3) is the $k$-median cost, under the 2-HST metric, of the output $C_0$ of Algorithm 4. (4) is the optimal cost after the subtrees are chosen; that is, it represents the minimal cost of picking one center from each subtree in $C_1$. We first bound the error of the subtree search step and the leaf search step, respectively.
+
+Lemma 3.5 (Subtree search). $\operatorname{cost}_{k}^{T^{\prime}}(U)\leq 5OPT_{k}^{T}(U)$.
+
+Lemma 3.6 (Leaf search). $\operatorname{cost}_k^T(U) \leq 2\operatorname{cost}_k^{T'}(U)$.
+
+Combining Lemma 3.5 and Lemma 3.6, we obtain:
+
+Theorem 3.7 (2-HST error). Running Algorithm 4, we have $\operatorname{cost}_k^T(U) \leq 10OPT_k^T(U)$.
+
+Thus, HST initialization produces an $O(1)$ approximation to the optimal cost in the 2-HST metric. Define $\operatorname{cost}_{k}(U)$ as in (1) for our HST centers, and the optimal cost w.r.t. $\rho$ as
+
+$$
+OPT_k(U) = \min_{|F| = k} \sum_{y \in U} \min_{x \in F} \rho(x, y). \tag{6}
+$$
+
+We have the following result based on Lemma 3.4.
+
+Theorem 3.8. In a general metric space, the expected $k$-median cost of NDP-HST (Algorithm 4) is $E[\operatorname{cost}_k(U)] = O(\min\{\log n, \log \triangle\}) OPT_k(U)$.
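To make the two-phase procedure analyzed above concrete, here is a self-contained Python sketch of the subtree search (Algorithm 4) and FIND-LEAF (Algorithm 5) on an explicit tree. The `Node` class and all names are illustrative, not from the original implementation; the screening loop transcribes the "remove ancestors, then refill" rule from Section 3.2.

```python
class Node:
    """A 2-HST node: `level` is h_v and `count` is N_v = |T(v)|."""
    def __init__(self, level, count, children=()):
        self.level, self.count, self.children = level, count, list(children)

def hst_initial_centers(root, k):
    """Two-phase center search: subtree search followed by greedy leaf descent."""
    # Collect every node together with the set of ids of its ancestors.
    nodes = []
    def walk(v, anc):
        nodes.append((v, anc))
        for c in v.children:
            walk(c, anc | {id(v)})
    walk(root, frozenset())

    # Phase 1 (subtree search): repeatedly add the highest-score nodes
    # (score(v) = N_v * 2^{h_v}), drop any selected node that is an
    # ancestor of another selected node, and refill until k disjoint
    # subtree roots remain.
    pool = sorted(nodes, key=lambda p: p[0].count * 2 ** p[0].level,
                  reverse=True)
    chosen = []
    while len(chosen) < k and pool:
        need = k - len(chosen)
        chosen, pool = chosen + pool[:need], pool[need:]
        chosen = [(v, a) for v, a in chosen
                  if not any(id(v) in aw for w, aw in chosen if w is not v)]
        # never re-select chosen nodes or their ancestors
        banned = {id(v) for v, _ in chosen}.union(*(a for _, a in chosen))
        pool = [(v, a) for v, a in pool if id(v) not in banned]

    # Phase 2 (FIND-LEAF): from each subtree root, greedily descend into
    # the child with the largest count until a leaf is reached.
    centers = []
    for v, _ in chosen:
        while v.children:
            v = max(v.children, key=lambda c: c.count)
        centers.append(v)
    return centers
```

On a toy tree with two heavy subtrees, the screening first discards the root (an ancestor of a higher-ranked subtree), then picks one leaf from each remaining subtree, illustrating why the selected subtrees are always disjoint.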
+
+Remark 3.9. In the Euclidean space, Makarychev et al. (2019) showed that using $O(\log k)$ random projections suffices for $k$-median to achieve $O(1)$ error. Thus, if $\log \triangle = O(\log d)$, by Lemma 3.4, HST initialization is able to achieve $O(\log (\min \{d,k\}))$ error, which is better than the $O(\log k)$ of $k$-median++ (Arthur and Vassilvitskii, 2007) when $d$ is small.
+
+NDP-HST Local Search. We are interested in the approximation quality of standard local search (Algorithm 1) when the initial centers are produced by our NDP-HST.
+
+Theorem 3.10. When initialized by NDP-HST, local search achieves $O(1)$ approximation error in an expected $O(k\log \log \min \{n,\triangle \})$ number of iterations for input in a general metric space.
+
+We remark that the initial centers found by NDP-HST can be used for $k$-means clustering analogously. For general metrics, $E[\operatorname{cost}_{km}(U)] = O(\min\{\log n, \log \triangle\})^2 OPT_{km}(U)$, where $\operatorname{cost}_{km}(U)$ is the $k$-means cost of our initial centers and $OPT_{km}(U)$ is the optimal $k$-means cost. See Appendix C for more details.
+
+# 4 HST Initialization with Differential Privacy
+
+In this section, we consider initialization and clustering with differential privacy (DP). Recall from (2) that in this problem, $U$ is the universe of data points, and $D \subset U$ is a demand set that needs to be clustered with privacy.
+Since $U$ is public, simply running initialization algorithms on $U$ would
+
+Algorithm 6: DP-HST initialization
+Input: $U, D, \triangle, k, \epsilon$
+Build a level-$L$ 2-HST $T$ based on input $U$
+for each node $v$ in $T$ do
+  $N_v \leftarrow |D \cap T(v)|$, $\hat{N}_v \leftarrow N_v + \text{Lap}(2^{(L - h_v)} / \epsilon)$, $\operatorname{score}(v) \leftarrow \hat{N}_v \cdot 2^{h_v}$
+Based on $\hat{N}_v$, apply the same strategy as Algorithm 4: find $C_1$; $C_0 = \text{FIND-LEAF}(T, C_1)$
+Output: Private initial center set $C_0 \subseteq U$
+
+Algorithm 7: DP-HST local search
+Input: $U$, demand points $D\subseteq U$, parameters $k, \epsilon, T$
+Initialization: $F_{1}$, the private initial centers generated by Algorithm 6 with privacy budget $\epsilon /2$
+Set parameter $\epsilon^{\prime} = \frac{\epsilon}{4\triangle(T + 1)}$
+for $i = 1$ to $T$ do
+  Select $(x,y)\in F_i\times (U\setminus F_i)$ with probability proportional to $\exp (-\epsilon^{\prime}\cdot \operatorname{cost}(F_i - \{x\} +\{y\}))$
+  Let $F_{i + 1}\gets F_i - \{x\} +\{y\}$
+Select $j$ from $\{1,2,\dots,T + 1\}$ with probability proportional to $\exp (-\epsilon^{\prime}\cdot \operatorname{cost}(F_j))$
+Output: $F = F_{j}$, the private center set
+
+preserve the privacy of $D$. However, 1) this might be too expensive; and 2) in many cases one would want to incorporate some information about $D$ in the initialization, since $D$ could be a very imbalanced subset of $U$. For example, $D$ may only contain data points from one cluster, out of tens of clusters in $U$. In this case, initialization on $U$ is likely to pick initial centers in multiple clusters, which would not be helpful for clustering on $D$.
+
+Next, we show how our proposed HST initialization can be easily combined with differential privacy, while at the same time incorporating useful information about the demand set $D$, leading to improved approximation error (Theorem 4.3). Again, suppose $T$ is an $L = \log \triangle$-level 2-HST of the universe $U$ in a general metric space.
+Denote $N_v = |T(v) \cap D|$ for a node $v$. Our private HST initialization (DP-HST) is similar to the non-private Algorithm 4. To gain privacy, we perturb $N_v$ by adding i.i.d. Laplace noise: $\hat{N}_v = N_v + \text{Lap}(2^{(L - h_v)} / \epsilon)$, where $\text{Lap}(2^{(L - h_v)} / \epsilon)$ is a Laplace random number with rate $2^{(L - h_v)} / \epsilon$. We use the perturbed $\hat{N}_v$ for node sampling instead of the true value $N_v$, as described in Algorithm 6. The DP guarantee of this initialization scheme follows directly from the composition theorem (Dwork, 2006).
+
+Theorem 4.1. Algorithm 6 is $\epsilon$-differentially private.
+
+Proof. For each level $i$, the subtrees rooted at level $i$ are disjoint from each other. The privacy budget used at the $i$-th level is $\epsilon / 2^{(L - i)}$, so by composition the total privacy budget is $\sum_{i}\epsilon / 2^{(L - i)} < \epsilon$.
+
+Theorem 4.2. Algorithm 6 finds initial centers such that
+
+$$
+E\left[\operatorname{cost}_k(D)\right] = O(\log n)\left(OPT_k(D) + k \epsilon^{-1} \triangle \log n\right).
+$$
+
+DP-HST Local Search. Similarly, we can use private HST initialization to improve the performance of private $k$-median local search, as presented in Algorithm 7. After DP-HST initialization, the DP local search procedure follows Gupta et al. (2010) using the exponential mechanism.
+
+Theorem 4.3. Algorithm 7 achieves $\epsilon$-differential privacy. The output centers achieve $\operatorname{cost}_k(D) \leq 6OPT_k(D) + O(\epsilon^{-1}k^2\triangle (\log \log n)\log n)$ in $O(k\log \log n)$ iterations with probability $(1 - \frac{1}{poly(n)})$.
+
+In prior literature, DP local search with random initialization (Gupta et al., 2010) has a multiplicative error of 6 and an additive error of $O(\epsilon^{-1} \triangle k^2 \log^2 n)$. Our result improves the $\log n$ term to $\log \log n$ in the additive error.
+Meanwhile, the number of iterations needed is improved from $T = O(k \log n)$ to $O(k \log \log n)$ (see Appendix A for an empirical justification). Notably, it has been shown in Gupta et al. (2010) that for the $k$-median problem, the lower bounds on the multiplicative and additive error of any $\epsilon$-DP algorithm are $O(1)$ and $O(\epsilon^{-1} \triangle k \log (n / k))$, respectively. Our result matches the lower bound on the multiplicative error, and the additive error is only worse than the bound by a factor of $O(k \log \log n)$. To our knowledge, Theorem 4.3 is the first result in the literature to improve the error of DP local search in general metric space.
+
+# 5 Numerical Results
+
+# 5.1 Datasets and Algorithms
+
+Discrete Euclidean space. Following previous work, we test $k$-median clustering on the MNIST hand-written digit dataset (LeCun et al., 1998) with 10 natural clusters (digits 0 to 9). We set $U$ as 10000 randomly chosen data points. We choose the demand set $D$ using two strategies: 1) "balance", where we randomly choose 500 samples from $U$; 2) "imbalance", where $D$ contains 500 random samples from $U$ drawn only from digits "0" and "8" (two clusters). We note that the imbalanced $D$ is a very practical setting in real-world scenarios, where data are typically not uniformly distributed. On this dataset, we test clustering with both the $l_{1}$ and $l_{2}$ distance as the underlying metric.
+
+Metric space induced by graph. Random graphs have been widely considered in testing $k$-median methods (Balcan et al., 2013; Todo et al., 2019). Our construction of graphs follows an approach similar to the synthetic pmedinfo graphs provided by the popular OR-Library (Beasley, 1990). The metric $\rho$ for this experiment is the (weighted) shortest-path distance. To generate a size-$n$ graph, we first randomly split the nodes into 10 clusters.
+Within each cluster, each pair of nodes is connected with probability 0.2, with edge weight drawn from uniform $[0, 1]$. For every pair of clusters, we randomly connect some nodes from each cluster, with weights following uniform $[0.5, r]$. A larger $r$ makes the graph more separable, i.e., clusters are farther from each other (see Appendix A for example graphs). For this task, $U$ has 3000 nodes, and the private set $D$ (500 nodes) is chosen using the same "balanced" and "imbalanced" approaches as described above. In the imbalanced case, we choose the demand set $D$ randomly from only two clusters.
+
+Algorithms. We compare the following clustering algorithms in both the non-DP and DP settings: (1) NDP-rand: Local search with random initialization; (2) NDP-kmedian++: Local search with $k$-median++ initialization (Algorithm 2); (3) NDP-HST: Local search with NDP-HST initialization (Algorithm 4), as described in Section 3; (4) DP-rand: Standard DP local search (Gupta et al., 2010), i.e., Algorithm 7 with initial centers randomly chosen from $U$; (5) DP-kmedian++: DP local search with $k$-median++ initialization run on $U$; (6) DP-HST: DP local search with HST initialization (Algorithm 7). For non-DP tasks, we set $L = 6$. For DP clustering, we use $L = 8$.
+
+For non-DP methods, we set $\alpha = 10^{-3}$ in Algorithm 1 and the maximum number of iterations to 20. To examine the quality of the initialization as well as the final centers, we report both the cost at initialization and the cost of the final output. For DP methods, we run the algorithms for $T = 20$ steps and report the results with $\epsilon = 1$ (comparisons/results with other $T$ and $\epsilon$ are similar). We test $k\in \{2,5,10,15,20\}$. The average cost over $T$ iterations is reported for robustness. All results are averaged over 10 independent repetitions.
+
+# 5.2 Results
+
+The results on MNIST and graph data are given in Figure 2.
+Here we present the $l_{2}$-clustering on MNIST, and the simulated graph with $r = 1$ (clusters are less separable). The comparisons are similar for both the $l_{1}$ metric on MNIST and the $r = 100$ graph (see Figure 4 in Appendix A):
+
+- From the left column, the initial centers found by HST have lower cost than those of $k$-median++ and random initialization, in both the non-DP and DP settings, and for both balanced and imbalanced demand sets $D$. This confirms that the proposed HST initialization is more powerful than $k$-median++ at finding good initial centers.
+- From the right column, we also observe a lower final cost for HST followed by local search in DP clustering. In the non-DP case, the final cost curves overlap, which means that although HST offers better initial centers, local search can always find a good solution eventually.
+- The advantage of DP-HST, in terms of both the initial and the final cost, is more significant when $D$ is an imbalanced subset of $U$. As mentioned before, this is because our DP-HST initialization approach also privately incorporates the information of $D$.
+
+To sum up, the proposed HST initialization scheme performs well with various metrics and data patterns, in both the non-private and private settings: in all cases, HST finds better initial centers with smaller cost than $k$-median++. HST considerably outperforms $k$-median++ in the private and imbalanced-$D$ setting, for MNIST with both the $l_{2}$ and $l_{1}$ metrics, and for graphs with both $r = 100$ (highly separable) and $r = 1$ (less separable).
+
+![](images/c8e70e52d046c72baaa21c754b4855306587b73060e017cccec665d77ab66db6.jpg)
+
+![](images/7aba38444eab5842cdc5493ec28bd9c6cb4f1333539f6fa7ecedc95057d91349.jpg)
+
+![](images/cddd3b2e607c053e22e251f0c8c1ea42c7b6fc60fc05d94fd0891b1b9c1a6747.jpg)
+
+![](images/0a2643d3c7534b0dd44c309d736b91a48bd3025b28739d85ff79109bb44d5854.jpg)
+
+![](images/38f8b987513b19659ef1ab199375a0ecd2805812fda5ce603b2469f95c4391d0.jpg)
+
+![](images/e308379a9a12791b16f6baa74832934e9090761f8f65d73a4e7b503a4db09a8d.jpg)
+
+![](images/087d420688e957b42ffb70e5bd95b08d77bb33c1b9e6a732d8b239ee44379568.jpg)
+Figure 2: $k$-median cost on the MNIST ($l_2$-metric) and graph dataset ($r = 1$). 1st column: initial cost. 2nd column: final output cost.
+
+![](images/24442efe850f5dd8175768926fcd27890f4bbd58567d0fbfde7c8a889afe152b.jpg)
+
+# 6 Conclusion
+
+We develop a new initialization framework for the $k$-median problem in general metric space. Our approach, called HST initialization, is built upon the HST structure from metric embedding theory. We propose a novel and efficient tree-search approach which provably improves the approximation error of $k$-median++, and has lower complexity (higher efficiency) than $k$-median++ when $k$ is not too small, which is common in practice. Moreover, we propose a differentially private (DP) HST initialization algorithm which adapts to the private demand point set, leading to better clustering performance. When combined with the subsequent DP local search heuristic, our algorithm is able to improve the additive error of DP local search, bringing it close to the theoretical lower bound within a small factor. Experiments with Euclidean metrics and graph metrics verify the effectiveness of our methods, which improve the cost of both the initial centers and the final $k$-median output.
+
+# References
+
+Martín Abadi, Andy Chu, Ian J. Goodfellow, H. Brendan McMahan, Ilya Mironov, Kunal Talwar, and Li Zhang. Deep learning with differential privacy.
+In Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security (CCS), pages 308-318, Vienna, Austria, 2016.
+Ameer Ahmed Abbasi and Mohamed F. Younis. A survey on clustering algorithms for wireless sensor networks. Comput. Commun., 30(14-15):2826-2841, 2007.
+Sara Ahmadian, Ashkan Norouzi-Fard, Ola Svensson, and Justin Ward. Better guarantees for k-means and euclidean k-median by primal-dual algorithms. In Proceedings of the 58th IEEE Annual Symposium on Foundations of Computer Science (FOCS), pages 61-72, Berkeley, CA, 2017.
+David Arthur and Sergei Vassilvitskii. k-means++: the advantages of careful seeding. In Proceedings of the Eighteenth Annual ACM-SIAM Symposium on Discrete Algorithms (SODA), pages 1027-1035, New Orleans, LA, 2007.
+Vijay Arya, Naveen Garg, Rohit Khandekar, Adam Meyerson, Kamesh Munagala, and Vinayaka Pandit. Local search heuristics for k-median and facility location problems. SIAM J. Comput., 33(3):544-562, 2004.
+Olivier Bachem, Mario Lucic, S. Hamed Hassani, and Andreas Krause. Approximate k-means++ in sublinear time. In Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence (AAAI), pages 1459-1467, Phoenix, AZ, 2016.
+Bahman Bahmani, Benjamin Moseley, Andrea Vattani, Ravi Kumar, and Sergei Vassilvitskii. Scalable k-means++. Proc. VLDB Endow., 5(7):622-633, 2012.
+Maria-Florina Balcan, Steven Ehrlich, and Yingyu Liang. Distributed k-means and k-median clustering on general communication topologies. In Advances in Neural Information Processing Systems (NIPS), pages 1995-2003, Lake Tahoe, NV, 2013.
+Maria-Florina Balcan, Travis Dick, Yingyu Liang, Wenlong Mou, and Hongyang Zhang. Differentially private clustering in high-dimensional euclidean spaces. In Proceedings of the 34th International Conference on Machine Learning (ICML), pages 322-331, Sydney, Australia, 2017.
+Arindam Banerjee, Srujana Merugu, Inderjit S. Dhillon, and Joydeep Ghosh. Clustering with bregman divergences. J. Mach. Learn.
Res., 6:1705-1749, 2005. +Yair Bartal. Probabilistic approximations of metric spaces and its algorithmic applications. In Proceedings of the 37th Annual Symposium on Foundations of Computer Science (FOCS), pages 184-193, Burlington, VT, 1996. +John E Beasley. OR-Library: distributing test problems by electronic mail. Journal of the Operational Research Society, 41(11):1069-1072, 1990. +Pavel Berkhin. A survey of clustering data mining techniques. In Grouping Multidimensional Data, pages 25-71. Springer, 2006. +Guy E. Blelloch, Yan Gu, and Yihan Sun. Efficient construction of probabilistic tree embeddings. In Proceedings of the 44th International Colloquium on Automata, Languages, and Programming (ICALP), pages 26:1-26:14, Warsaw, Poland, 2017. +Jaroslaw Byrka, Thomas W. Pensyl, Bartosz Rybicki, Aravind Srinivasan, and Khoa Trinh. An improved approximation for $k$ -median, and positive correlation in budgeted optimization. In Proceedings of the Twenty-Sixth Annual ACM-SIAM Symposium on Discrete Algorithms (SODA), pages 737-756, San Diego, CA, 2015. +Kamalika Chaudhuri and Claire Monteleoni. Privacy-preserving logistic regression. In Advances in Neural Information Processing Systems (NIPS), pages 289-296, Vancouver, Canada, 2008. +Kamalika Chaudhuri, Claire Monteleoni, and Anand D. Sarwate. Differentially private empirical risk minimization. J. Mach. Learn. Res., 12:1069-1109, 2011. + +Davin Choo, Christoph Grunau, Julian Portmann, and Václav Rozhon. k-means++: few more steps yield constant approximation. In International Conference on Machine Learning, pages 1909-1917, 2020. +Vincent Cohen-Addad, Silvio Lattanzi, Ashkan Norouzi-Fard, Christian Sohler, and Ola Svensson. Parallel and efficient hierarchical k-median clustering. In Advances in Neural Information Processing Systems (NeurIPS), pages 20333–20345, virtual, 2021. +Inderjit S. Dhillon and Dharmendra S. Modha. Concept decompositions for large sparse text data using clustering. Mach. 
Learn., 42(1/2):143-175, 2001. +Jinshuo Dong, Aaron Roth, and Weijie J Su. Gaussian differential privacy. Journal of the Royal Statistical Society Series B: Statistical Methodology, 84(1):3-37, 2022. +Cynthia Dwork. Differential privacy. In Proceedings of the 33rd International Colloquium on Automata, Languages and Programming (ICALP), Part II, pages 1-12, Venice, Italy, 2006. +Jittat Fakcharoenphol, Satish Rao, and Kunal Talwar. A tight bound on approximating arbitrary metrics by tree metrics. J. Comput. Syst. Sci., 69(3):485-497, 2004. +Chenglin Fan and Ping Li. Distances release with differential privacy in tree and grid graph. In IEEE International Symposium on Information Theory (ISIT), pages 2190-2195, 2022. +Chenglin Fan, Ping Li, and Xiaoyun Li. Private graph all-pairwise-shortest-path distance release with improved error rate. In Advances in Neural Information Processing Systems (NeurIPS), New Orleans, LA, 2022. +Chenglin Fan, Ping Li, and Xiaoyun Li. LSDS++: Dual sampling for accelerated k-means++. In Proceedings of the International Conference on Machine Learning (ICML), pages 9640-9649, Honolulu, HI, 2023. +Huang Fang, Xiaoyun Li, Chenglin Fan, and Ping Li. Improved convergence of differential private sgd with gradient clipping. In Proceedings of the Eleventh International Conference on Learning Representations (ICLR), Kigali, Rwanda, 2023. +Dan Feldman, Amos Fiat, Haim Kaplan, and Kobbi Nissim. Private coresets. In Proceedings of the 41st Annual ACM Symposium on Theory of Computing (STOC), pages 361-370, Bethesda, MD, 2009. +Dan Feldman, Chongyuan Xiang, Ruihao Zhu, and Daniela Rus. Coresets for differentially private k-means clustering and applications to privacy in mobile sensor networks. In Proceedings of the 16th ACM/IEEE International Conference on Information Processing in Sensor Networks (IPSN), pages 3-15, Pittsburgh, PA, 2017. +Jason Ge, Zhaoran Wang, Mengdi Wang, and Han Liu. 
Minimax-optimal privacy-preserving sparse PCA in distributed systems. In Proceedings of the International Conference on Artificial Intelligence and Statistics (AISTATS), pages 1589-1598, Playa Blanca, Lanzarote, Canary Islands, Spain, 2018. +Badih Ghazi, Ravi Kumar, and Pasin Manurangsi. Differentially private clustering: Tight approximation ratios. In Advances in Neural Information Processing Systems (NeurIPS), virtual, 2020. +Christoph Grunau, Ahmet Alper Özüdogru, Václav Rozhón, and Jakub Tětek. A nearly tight analysis of greedy k-means++. In Proceedings of the 2023 Annual ACM-SIAM Symposium on Discrete Algorithms (SODA), pages 1012-1070, Florence, Italy, 2023. +Anupam Gupta, Katrina Ligett, Frank McSherry, Aaron Roth, and Kunal Talwar. Differentially private combinatorial optimization. In Proceedings of the Twenty-First Annual ACM-SIAM Symposium on Discrete Algorithms (SODA), pages 1106-1125, Austin, TX, 2010. +Zhiyi Huang and Jinyan Liu. Optimal differentially private algorithms for k-means clustering. In Proceedings of the 37th ACM SIGMOD-SIGACT-SIGAI Symposium on Principles of Database Systems (PODS), pages 395-408, Houston, TX, 2018. + +Matthew Jones, Huy L. Nguyen, and Thy D. Nguyen. Differentially private clustering via maximum coverage. In Proceedings of the Thirty-Fifth AAAI Conference on Artificial Intelligence (AAAI), pages 11555-11563, Virtual Event, 2021. +Tapas Kanungo, David M. Mount, Nathan S. Netanyahu, Christine D. Piatko, Ruth Silverman, and Angela Y. Wu. A local search approximation algorithm for k-means clustering. In Proceedings of the 18th Annual Symposium on Computational Geometry (CG), pages 10-18, Barcelona, Spain, 2002. +Leon Kaufman, Marc Vanden Eede, and Pierre Hansen. A plant and warehouse location problem. Journal of the Operational Research Society, 28(3):547-554, 1977. +Silvio Lattanzi and Christian Sohler. A better k-means++ algorithm via local search. 
In Proceedings of the 36th International Conference on Machine Learning (ICML), pages 3662-3671, Long Beach, CA, 2019. +Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. Proc. IEEE, 86(11):2278-2324, 1998. +Ping Li and Xiaoyun Li. Differential privacy with random projections and sign random projections. In Advances in Neural Information Processing Systems (NeurIPS), New Orleans, LA, 2023a. +Xiaoyun Li and Ping Li. Differentially private one permutation hashing and bin-wise consistent weighted sampling. arXiv preprint arXiv:2306.07674, 2023b. +Stuart P. Lloyd. Least squares quantization in PCM. IEEE Trans. Inf. Theory, 28(2):129-136, 1982. +Konstantin Makarychev, Yury Makarychev, and Ilya P. Razenshteyn. Performance of Johnson-Lindenstrauss transform for $k$ -means and $k$ -medians clustering. In Proceedings of the 51st Annual ACM SIGACT Symposium on Theory of Computing (STOC), pages 1027-1038, Phoenix, AZ, 2019. +Frank McSherry and Kunal Talwar. Mechanism design via differential privacy. In Proceedings of the 48th Annual IEEE Symposium on Foundations of Computer Science (FOCS), pages 94-103, Providence, RI, 2007. +Richard Nock, Raphaël Canyasse, Roksana Boreli, and Frank Nielsen. k-variates++: more pluses in the k-means++. In Proceedings of the 33rd International Conference on Machine Learning (ICML), pages 145-154, New York City, NY, 2016. +Girish Punj and David W Stewart. Cluster analysis in marketing research: Review and suggestions for application. Journal of Marketing Research, 20(2):134-148, 1983. +Maurizio G. C. Resende and Renato Fonseca F. Werneck. A fast swap-based local search procedure for location problems. Ann. Oper. Res., 150(1):205-230, 2007. +Rahul Shah. Faster algorithms for k-median problem on trees with smaller heights. Technical Report, 2003. +Uri Stemmer and Haim Kaplan. Differentially private k-means with constant multiplicative error.
In Advances in Neural Information Processing Systems (NeurIPS), pages 5436-5446, Montréal, Canada, 2018. +Arie Tamir. An $O(pn^2)$ algorithm for the p-median and related problems on tree graphs. Oper. Res. Lett., 19(2):59-64, 1996. +Keisuke Todo, Atsuyoshi Nakamura, and Mineichi Kudo. A fast approximate algorithm for k-median problem on a graph. In Proceedings of the 15th International Workshop on Mining and Learning with Graphs (MLG), Anchorage, AK, 2019. +Kang Wei, Jun Li, Ming Ding, Chuan Ma, Howard H. Yang, Farhad Farokhi, Shi Jin, Tony Q. S. Quek, and H. Vincent Poor. Federated learning with differential privacy: Algorithms and performance analysis. IEEE Trans. Inf. Forensics Secur., 15:3454-3469, 2020. + +# $k$ -Median Clustering via Metric Embedding: Towards Better Initialization with Differential Privacy (Supplementary Material) + +# A More Details on Experiments + +# A.1 Examples of Graph Data + +In Figure 3, we plot two example graphs (subgraphs of 50 nodes) with $r = 100$ and $r = 1$ . When $r = 100$ , the graph is highly separable (i.e., clusters are far from each other). When $r = 1$ , the clusters are harder to distinguish from each other. + +![](images/7801249386627e1fad5a417576a201a836d54c27083dea07e65f41f4dffc099a.jpg) + +![](images/6b767a6ddd348e41d89a0094a3df7ffe6430fd9f21850841fac67bf0ef8d2d50.jpg) +Figure 3: Example of synthetic graphs: subgraph of 50 nodes. Upper: $r = 1$ . Bottom: $r = 100$ . Darker and thicker edges indicate smaller distances. When $r = 100$ , the graph is more separable.
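The role of the separation parameter $r$ can be illustrated with a small, hypothetical generator (the paper's exact construction may differ; `make_clustered_graph` and all of its parameters are our own illustration): intra-cluster edges keep small weights, while inter-cluster edge weights are scaled by $r$, so a large $r$ produces well-separated clusters and $r = 1$ lets the clusters overlap in weight.

```python
import random

def make_clustered_graph(k=5, size=10, r=100, seed=0):
    """Illustrative generator (not the paper's exact construction):
    k clusters of `size` nodes each; intra-cluster edges get small
    weights, inter-cluster edge weights are scaled by r."""
    rng = random.Random(seed)
    nodes = [(c, i) for c in range(k) for i in range(size)]
    edges = {}
    for a in range(len(nodes)):
        for b in range(a + 1, len(nodes)):
            same_cluster = nodes[a][0] == nodes[b][0]
            w = rng.uniform(0.5, 1.0) * (1 if same_cluster else r)
            edges[(a, b)] = w
    return nodes, edges

nodes, edges = make_clustered_graph(k=3, size=4, r=100)
intra = [w for (a, b), w in edges.items() if nodes[a][0] == nodes[b][0]]
inter = [w for (a, b), w in edges.items() if nodes[a][0] != nodes[b][0]]
print(max(intra) < min(inter))  # large r: every inter-cluster edge is longer
```

With `r=1` the two weight populations overlap, matching the "harder to distinguish" regime in Figure 3.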
+ +![](images/6f0be3444bd405d26f53638f43120cad3eadae21e38febfa1355f9e0163a21ce.jpg) + +![](images/97428805cd115f3648cbb93308b38bc82bb6e710ea7b849836d2d8091a22419c.jpg) + +![](images/9b98966b244237b67fe5917f0afcc49d547c44526e2c4fcf403f1ac441338e53.jpg) + +![](images/7196fff7eed19099060f35682f35d5fb601c4cd40ef692dc7b5bb3e179de9334.jpg) + +![](images/86998a040df659ff02376cc79d762c6cb37f8b591668bf53c4dcf88da3e7bdd8.jpg) + +![](images/8216df4b93b7e68c6618adbd70486970ceb74f163577e87604ba3f7ef1bde78f.jpg) + +![](images/eaf7ee4b9e3aec5423dc3b487d3810d1a7dfa28ec740baa90bc7bd3f8a86fadc.jpg) +Figure 4: $k$ -median cost on the MNIST ( $l_1$ -metric) and graph dataset ( $r = 100$ ). 1st column: initial cost. 2nd column: final output cost. + +![](images/eaa6adf9674ffd4998506431e5281dcb3f083bc6e3b4f532a7d643baf574c432.jpg) + +# A.2 More Experiments + +# A.3 Improved Iteration Cost of DP-HST + +In Theorem 4.3, we show that under differential privacy constraints, the proposed DP-HST (Algorithm 7) improves both the approximation error and the number of iterations required to find a good solution of the classical DP local search (Gupta et al., 2010). In this section, we provide some numerical results to justify the theory. + +First, we need to properly measure the iteration cost of DP local search. This is because, unlike non-private clustering, the $k$ -median cost after each iteration in DP local search is not decreasing monotonically, due to the probabilistic exponential mechanism. To this end, for the cost sequence with length $T = 20$ , we compute its moving average sequence with window size 5. Attaining the minimal value of the moving average indicates that the algorithm has found a "local optimum", i.e., it has reached a "neighborhood" of solutions with small clustering cost. Thus, we use the number of iterations to reach such a local optimum as the measure of iteration cost. The results are provided in Figure 5.
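The iteration-cost measure described above can be sketched in a few lines (a hedged illustration; the helper name `iterations_to_local_opt` and the example cost sequence are our own, only the window size of 5 and sequence length $T = 20$ come from the text):

```python
def iterations_to_local_opt(costs, window=5):
    """Smooth a (possibly non-monotone) DP local-search cost sequence
    with a moving average, and report how many iterations it takes to
    reach the minimum of the smoothed sequence."""
    avgs = [sum(costs[i:i + window]) / window
            for i in range(len(costs) - window + 1)]
    # index of the minimal moving average, counted in raw iterations
    return min(range(len(avgs)), key=avgs.__getitem__) + window

# Example: a noisy, non-monotone cost sequence of length T = 20
costs = [90, 80, 85, 60, 55, 58, 40, 42, 38, 39,
         41, 40, 39, 42, 40, 41, 39, 40, 42, 41]
print(iterations_to_local_opt(costs))  # → 13
```

Because the exponential mechanism can temporarily increase the cost, the raw argmin of `costs` would be a noisy measure; the moving average is what makes "reached a local optimum" well defined here.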
We see that on all the tasks (MNIST with $l_{1}$ and $l_{2}$ distance, and graph dataset with $r = 1$ and $r = 100$ ), DP-HST has significantly smaller iteration cost. In Figure 6, we further report the $k$ -median cost of the best solution in $T$ iterations found by each DP algorithm. We see that DP-HST again provides the smallest cost. This additional set of experiments again validates the claims of Theorem 4.3, that DP-HST is able to find better initial centers in fewer iterations. + +![](images/71f19bc752e0f7e36459d91a330735a4ecbefb55e737c7fe50eb40b797faeeed.jpg) + +![](images/c7a186da5478b1a2e72e88883138c116a51f8092bfac2449dd4b245844ebb10e.jpg) + +![](images/b8ce09e8bc8c9dbf78dac473c138102a198cc1494d7c87214e87520dcb939133.jpg) +Figure 5: Iteration cost to reach a locally optimal solution, on MNIST and graph datasets with different $k$ . The demand set is an imbalanced subset of the universe. + +![](images/beddfcc9318965f60234c94ecacef38bf12e2c572f294ec1e8f11deea7ddd7ae.jpg) + +![](images/440b79e0a9ad9bb882f61c5dda7e1b1105c17c759a0ed888320fc759a0ed888320fc.jpg) +Figure 6: The $k$ -median cost of the best solution found by each differentially private algorithm. The demand set is an imbalanced subset of the universe. The same comparison holds on graph data. + +![](images/4116b449c10ee4b8430438ab6ddd1a373c067dc7f8c56024633f48d4940ada99.jpg) + +# B Technical Proofs + +The following composition result of differential privacy will be used in our proof. + +Theorem B.1 (Composition Theorem (Dwork, 2006)). If Algorithms $\mathbb{A}_1,\mathbb{A}_2,\dots,\mathbb{A}_m$ are $\epsilon_1,\epsilon_2,\ldots ,\epsilon_m$ differentially private respectively, then the union $(\mathbb{A}_1(D),\mathbb{A}_2(D),\ldots ,\mathbb{A}_m(D))$ is $\sum_{i = 1}^{m}\epsilon_{i}$ -DP. + +# B.1 Proof of Lemma 3.5 + +Proof. Consider the intermediate output of Algorithm 4, $C_1 = \{v_1, v_2, \dots, v_k\}$ , which is the set of roots of the minimal subtrees each containing exactly one output center in $C_0$ .
Suppose one of the optimal "root set" that minimizes (4) is $C_1^* = \{v_1', v_2', \dots, v_k'\}$ . If $C_1 = C_1^*$ , the proof is done. Thus, we prove the case for $C_1 \neq C_1^*$ . Note that $T(v), v \in C_1$ are disjoint subtrees. We have the following reasoning. + +- Case 1: for some $i, j'$ , $v_i$ is a descendant node of $v_j'$ . Since the optimal center point $f^*$ is a leaf node by the definition of (4), we know that there must exist one child node of $v_j'$ whose subtree contains $f^*$ . Therefore, we can always replace $v_j'$ by one of its child nodes. Hence, we can assume that $v_i$ is not a descendant of $v_j'$ . + +Note that we have $score(v_j') \leq score(v_i)$ if $v_j' \notin C_1^* \cap C_1$ . Algorithm 4 sorts all the nodes by score, and it would pick $v_j'$ before $v_i$ if $score(v_j') > score(v_i)$ and $v_i$ is not a child node of $v_j'$ . + +- Case 2: for some $i, j'$ , $v_j'$ is a descendant of $v_i$ . In this case, the optimal center point $f^*$ , which is a leaf of $T(v_i)$ , must also be a leaf node of $T(v_j')$ . We can simply replace $C_1$ with the swap $C_1 \setminus \{v_i\} + \{v_j'\}$ which does not change $\text{cost}_k^{T'}(U)$ . Hence, we can assume that $v_j'$ is not a descendant of $v_i$ . +- Case 3: Otherwise. By the construction of $C_1$ , we know that $score(v_j') \leq \min\{score(v_i), i = 1, \dots, k\}$ when $v_j' \in C_1^* \setminus C_1$ . Consider the swap between $C_1$ and $C_1^*$ . By the definition of tree distance, we have $OPT_k^T(U) \geq \sum_{v_i \in C_1 \setminus C_1^*} N_{v_i} 2^{h_{v_i}}$ , since $\{T(v_i), v_i \in C_1 \setminus C_1^*\}$ does not contain any center of the optimal solution determined by $C_1^*$ (which is also the optimal "root set" for $OPT_k^T(U)$ ). + +Thus, we only need to consider Case 3.
Let us consider the optimal clustering with center set $C^* = \{c_1^*, c_2^*, \dots, c_k^*\}$ (each center $c_j^*$ is a leaf of the subtree whose root is $c_j'$ ), and let $S_j'$ be the leaves assigned to $c_j^*$ . Let $S_j$ denote the set of leaves in $S_j'$ whose distance to $c_j^*$ is strictly smaller than its distance to any centers in $C_1$ . Let $P_j$ denote the union of paths between leaves of $S_j$ to its closest center in $C_1$ . Let $v_j''$ be the nodes in $P_j$ with highest level satisfying $T(v_j'') \cap C_1 = \emptyset$ . The score of $v_j''$ is $2^{h_{v_j''}} N(v_j'')$ . That means the swap of a center $v_j'$ into $C_1$ can reduce $cost_k^{T'}(U)$ by at most $4 \cdot 2^{h_{v_j''}} N(v_j'')$ (the tree distance between any leaf in $S_j$ and its closest center in $C_1$ is at most $4 \cdot 2^{h_{v_j''}}$ ). We just use $v_j'$ to represent $v_j''$ for the rest of this proof for simplicity. By our reasoning, summing all the swaps over $C_1^* \setminus C_1$ gives + +$$ +\operatorname{cost}_{k}^{T^{\prime}}(U) - \operatorname{OPT}_{k}^{T}(U)\leq 4\sum_{v^{\prime}_{j}\in C_{1}^{*}\setminus C_{1}}N_{v^{\prime}_{j}}2^{h_{v^{\prime}_{j}}}, +$$ + +$$ +OPT_k^T(U) \geq \sum_{v_i \in C_1 \setminus C_1^*} N_{v_i} 2^{h_{v_i}}. +$$ + +Also, based on our discussion on Case 1, it holds that + +$$ +N_{v_j'} 2^{h_{v_j'}} - N_{v_i} 2^{h_{v_i}} \leq 0. +$$ + +Summing them together, we have $\text{cost}_k^{T'}(U) \leq 5OPT_k^T(U)$ . + +# B.2 Proof of Lemma 3.6 + +Proof. Since the subtrees in $C_1$ are disjoint, it suffices to consider one subtree with root $v$ . With a little abuse of notation, let $cost_1^{T'}(v, U)$ denote the optimal $k$ -median cost within the point set $T(v)$ with one center in 2-HST: + +$$ +\operatorname{cost}_1^{T'}(v, U) = \min_{x \in T(v)} \sum_{y \in T(v)} \rho^T(x, y), \tag{7} +$$ + +which is the optimal cost within the subtree.
Suppose $v$ has more than one child $u, w, \ldots$ ; otherwise the optimal center is clear. Suppose the optimal solution of $cost_1^{T'}(v, U)$ chooses a leaf node in $T(u)$ , and our HST initialization algorithm picks a leaf of $T(w)$ . If $u = w$ , then HST chooses the optimal one and the argument holds trivially. Thus, we consider $u \neq w$ . We have the following two observations: + +- Since one needs to pick a leaf of $T(u)$ to minimize $cost_1^{T'}(v, U)$ , we have $cost_1^{T'}(v, U) \geq \sum_{x \in ch(v), x \neq u} N_x \cdot 2^{h_x}$ where $ch(v)$ denotes the children nodes of $v$ . +- By our greedy strategy, $\text{cost}_1^T(v, U) \leq \sum_{x \in \text{ch}(u)} N_x \cdot 2^{h_x} \leq \text{cost}_1^{T'}(v, U) + N_u \cdot 2^{h_u}$ . + +Since $h_u = h_w$ , we have + +$$ +2^{h_u} \cdot (N_u - N_w) \leq 0, +$$ + +since our algorithm picks subtree roots with highest scores. Then we have $cost_1^T(v, U) \leq cost_1^{T'}(v, U) + N_w \cdot 2^{h_w} \leq 2cost_1^{T'}(v, U)$ . Since the subtrees in $C_1$ are disjoint, the union of centers for $OPT_1^T(v, U)$ , $v \in C_1$ forms the optimal centers with size $k$ . Note that, for any data point $p \in U \setminus C_1$ , the tree distance $\rho^T(p, f)$ for $\forall f$ that is a leaf node of $T(v)$ , $v \in C_1$ is the same. That is, the choice of leaf in $T(v)$ as the center does not affect the $k$ -median cost under the 2-HST metric. Therefore, a union bound over the $k$ subtree costs completes the proof. + +# B.3 Proof of Proposition 3.2 + +Proof. It is known that the 2-HST can be constructed in $O(dn\log n)$ (Bartal, 1996). The subtree search in Algorithm 4 involves at most sorting all the nodes in the HST based on the score, which takes $O(n\log n)$ . We use a priority queue to store the nodes in $C_1$ . When we insert a new node $v$ into the queue, its parent node (if it exists in the queue) is removed from the queue.
The number of nodes is $O(n)$ and each operation (insertion, deletion) in a priority queue based on score has $O(\log n)$ complexity. Lastly, the total time to obtain $C_0$ is $O(n)$ , as the FIND-LEAF only requires a top-down scan in $k$ disjoint subtrees of $T$ . Summing parts together proves the claim. + +# B.4 Proof of Theorem 4.2 + +Similarly, we prove the error in general metric by first analyzing the error in 2-HST metric. Then the result follows from Lemma 3.4. Let $\text{cost}_k^T(D)$ , $\text{cost}_k^{T'}(D)$ and $\text{OPT}_k^T(D)$ be defined analogously to (3), (4) and (5), where " $y \in U$ " in the summation is changed into " $y \in D$ " since $D$ is the demand set. That is, + +$$ +\operatorname{cost}_k^T(D) = \sum_{y \in D} \min_{x \in C_0} \rho^T(x, y), \tag{8} +$$ + +$$ +\operatorname{cost}_k^{T'}(D, C_1) = \min_{|F \cap T(v)| = 1, \forall v \in C_1} \sum_{y \in D} \min_{x \in F} \rho^T(x, y), \tag{9} +$$ + +$$ +OPT_k^T(D) = \min_{F \subset D, |F| = k} \sum_{y \in D} \min_{x \in F} \rho^T(x, y) \equiv \min_{C_1'} \operatorname{cost}_k^{T'}(D, C_1'). \tag{10} +$$ + +We have the following. + +Lemma B.2. $\text{cost}_k^T(D) \leq 10OPT_k^T(D) + 10ck\epsilon^{-1}\triangle \log n$ with probability $1 - 4k/n^c$ . + +Proof. The result follows by combining the following Lemma B.4, Lemma B.5, and applying union bound. $\square$ + +Lemma B.3. For any node $v$ in $T$ , with probability $1 - 1/n^c$ , $|\hat{N}_v \cdot 2^{h_v} - N_v \cdot 2^{h_v}| \leq c\epsilon^{-1}\triangle \log n$ . + +Proof. Since $\hat{N}_v = N_v + \text{Lap}(2^{(L - h_v) / 2} / \epsilon)$ , we have + +$$ +\Pr\left[\left|\hat{N}_v - N_v\right| \geq x/\epsilon\right] = \exp\left(-x/2^{(L - h_v)}\right).
+$$ + +As $L = \log \triangle$ , we have + +$$ +\Pr\left[|\hat{N}_v - N_v| \geq x\triangle/(2^{h_v}\epsilon)\right] \leq \exp(-x). +$$ + +Hence, for some constant $c > 0$ , + +$$ +\Pr\left[\left|\hat{N}_v \cdot 2^{h_v} - N_v \cdot 2^{h_v}\right| \leq c\epsilon^{-1}\triangle \log n\right] \geq 1 - \exp(-c\log n) = 1 - 1/n^c. +$$ + +$\square$ + +Lemma B.4 (DP Subtree Search). With probability $1 - 2k / n^{c}$ , $\text{cost}_{k}^{T'}(D) \leq 5OPT_{k}^{T}(D) + 4ck\epsilon^{-1}\triangle \log n$ . + +Proof. The proof is similar to that of Lemma 3.5. Consider the intermediate output of Algorithm 4, $C_1 = \{v_1, v_2, \dots, v_k\}$ , which is the set of roots of the minimal disjoint subtrees each containing exactly one output center in $C_0$ . Suppose one of the optimal "root set" that minimizes (4) is $C_1^* = \{v_1', v_2', \dots, v_k'\}$ . Assume $C_1 \neq C_1^*$ . By the same argument as the proof of Lemma 3.5, we consider some $i, j$ such that $v_i \neq v_j'$ , where $v_i$ is not a descendant of $v_j'$ and $v_j'$ is not a descendant of $v_i$ . By the construction of $C_1$ , we know that $score(v_j') \leq \min \{score(v_i), i = 1, \dots, k\}$ when $v_j' \in C_1^* \setminus C_1$ . Consider the swap between $C_1$ and $C_1^*$ . By the definition of tree distance, we have $OPT_k^T(U) \geq \sum_{v_i \in C_1 \setminus C_1^*} N_{v_i} 2^{h_{v_i}}$ , since $\{T(v_i), v_i \in C_1 \setminus C_1^*\}$ does not contain any center of the optimal solution determined by $C_1^*$ (which is also the optimal "root set" for $OPT_k^T$ ). + +Let us consider the optimal clustering with center set $C^* = \{c_1^*, c_2^*, \dots, c_k^*\}$ (each center $c_j^*$ is a leaf of the subtree whose root is $c_j'$ ), and let $S_j'$ be the leaves assigned to $c_j^*$ .
Let $S_j$ denote the set of leaves in $S_j'$ whose distance to $c_j^*$ is strictly smaller than its distance to any centers in $C_1$ . Let $P_j$ denote the union of paths between leaves of $S_j$ to its closest center in $C_1$ . Let $v_j''$ be the nodes in $P_j$ with highest level satisfying $T(v_j'') \cap C_1 = \emptyset$ . The score of $v_j''$ is $2^{h_{v_j''}} N(v_j'')$ . That means the swap of a center $v_j'$ into $C_1$ can reduce $cost_k^{T'}(U)$ by at most $4 \cdot 2^{h_{v_j''}} N(v_j'')$ (the tree distance between any leaf in $S_j$ and its closest center in $C_1$ is at most $4 \cdot 2^{h_{v_j''}}$ ). We just use $v_j'$ to represent $v_j''$ for the rest of this proof for simplicity. Summing all the swaps over $C_1^* \setminus C_1$ , we obtain + +$$ +\operatorname{cost}_{k}^{T^{\prime}}(U) - OPT_{k}^{T}(U)\leq 4\sum_{v^{\prime}_{j}\in C_{1}^{*}\backslash C_{1}}N_{v^{\prime}_{j}}2^{h_{v^{\prime}_{j}}}, +$$ + +$$ +OPT_k^T(U) \geq \sum_{v_i \in C_1 \setminus C_1^*} N_{v_i} 2^{h_{v_i}}. +$$ + +Applying union bound with Lemma B.3, with probability $1 - 2 / n^{c}$ , we have + +$$ +N_{v_j'} 2^{h_{v_j'}} - N_{v_i} 2^{h_{v_i}} \leq 2c\epsilon^{-1}\triangle \log n. +$$ + +Consequently, with probability $1 - 2k / n^{c}$ , we have + +$$ +\begin{array}{l} \operatorname{cost}_k^{T'}(D) \leq 5OPT_k^T(D) + 4c|C_1 \backslash C_1^*|\epsilon^{-1}\triangle \log n \\ \leq 5OPT_k^T(D) + 4ck\epsilon^{-1}\triangle \log n, \end{array} +$$ + +which proves the claim. $\square$ + +Lemma B.5 (DP Leaf Search). With probability $1 - 2k / n^{c}$ , Algorithm 6 produces initial centers with $\mathrm{cost}_k^T (D)\leq 2\mathrm{cost}_k^{T'}(D) + 2ck\epsilon^{-1}\triangle \log n$ . + +Proof. The proof strategy follows Lemma 3.6.
We first consider one subtree with root $v$ . Let $cost_1^{T'}(v, D)$ denote the optimal $k$ -median cost within the point set $T(v)$ with one center in 2-HST: + +$$ +\operatorname{cost}_1^{T'}(v, D) = \min_{x \in T(v)} \sum_{y \in T(v) \cap D} \rho^T(x, y). \tag{11} +$$ + +Suppose $v$ has more than one child $u, w, \ldots$ , and the optimal solution of $cost_1^{T'}(v, D)$ chooses a leaf node in $T(u)$ , and our HST initialization algorithm picks a leaf of $T(w)$ . If $u = w$ , then HST chooses the optimal one and the argument holds trivially. Thus, we consider $u \neq w$ . We have the following two observations: + +- Since one needs to pick a leaf of $T(u)$ to minimize $cost_1^{T'}(v, D)$ , we have $cost_1^{T'}(v, D) \geq \sum_{x \in ch(v), x \neq u} N_x \cdot 2^{h_x}$ where $ch(v)$ denotes the children nodes of $v$ . +- By our greedy strategy, $\text{cost}_1^T(v, D) \leq \sum_{x \in \text{ch}(u)} N_x \cdot 2^{h_x} \leq \text{cost}_1^{T'}(v, D) + N_u \cdot 2^{h_u}$ . + +As $h_u = h_w$ , leveraging Lemma B.3, with probability $1 - 2 / n^c$ , + +$$ +\begin{array}{l} 2^{h_u} \cdot (N_u - N_w) \leq 2^{h_u}(\hat{N}_u - \hat{N}_w) + 2c\epsilon^{-1}\triangle \log n \\ \leq 2c\epsilon^{-1}\triangle \log n, \end{array} +$$ + +since our algorithm picks subtree roots with highest scores. Then we have $cost_1^T(v, D) \leq cost_1^{T'}(v, D) + N_w \cdot 2^{h_u} + 2c\epsilon^{-1}\triangle \log n \leq 2cost_1^{T'}(v, D) + 2c\epsilon^{-1}\triangle \log n$ with high probability. Lastly, applying union bound over the disjoint $k$ subtrees gives the desired result. + +# B.5 Proof of Theorem 4.3 + +Proof. The privacy analysis is straightforward, by using the composition theorem (Theorem B.1). Since the sensitivity of $cost(\cdot)$ is $\triangle$ , in each swap iteration the privacy budget is $\epsilon / (2(T + 1))$ .
Also, we spend another $\epsilon / (2(T + 1))$ of the privacy budget for picking an output. Hence, the total privacy is $\epsilon / 2$ for local search. Algorithm 6 takes $\epsilon / 2$ DP budget for initialization, so the total privacy is $\epsilon$ . + +The analysis of the approximation error follows from Gupta et al. (2010), where the initial cost is reduced by our private HST method. We need the following two lemmas. + +Lemma B.6 (Gupta et al. (2010)). Assume the solution to the optimal utility is unique. For any output $o \in O$ of the $2\triangle\epsilon$ -DP exponential mechanism on dataset $D$ , it holds for all $t > 0$ that + +$$ +\Pr[q(D, o) \leq \max_{o \in O} q(D, o) - (\ln|O| + t)/\epsilon] \leq e^{-t}, +$$ + +where $|O|$ is the size of the output set. + +Lemma B.7 (Arya et al. (2004)). For any set $F \subseteq D$ with $|F| = k$ , there exists some swap $(x, y)$ such that the local search method admits + +$$ +\operatorname{cost}_k(F, D) - \operatorname{cost}_k(F - \{x\} + \{y\}, D) \geq \frac{\operatorname{cost}_k(F, D) - 5OPT(D)}{k}. +$$ + +From Lemma B.7, we know that when $\text{cost}_k(F_i, D) > 6OPT(D)$ , there exists a swap $(x, y)$ s.t. + +$$ +\operatorname{cost}_k(F_i - \{x\} + \{y\}, D) \leq \left(1 - \frac{1}{6k}\right)\operatorname{cost}_k(F_i, D). +$$ + +At each iteration, there are at most $n^2$ possible outputs (i.e., possible swaps), i.e., $|O| = n^2$ . Using Lemma B.6 with $t = 2\log n$ , for all $i$ , + +$$ +\Pr[\operatorname{cost}_k(F_{i+1}, D) \leq \operatorname{cost}_k(F_{i+1}^*, D) + 4\frac{\log n}{\epsilon'}] \geq 1 - 1/n^2, +$$ + +where $\text{cost}_k(F_{i+1}^*, D)$ is the minimum cost among iterations 1, 2, ..., $t + 1$ . Hence, we have that as long as $\text{cost}(F_i, D) > 6OPT(D) + \frac{24k\log n}{\epsilon'}$ , the improvement in cost is at least by a factor of $(1 - \frac{1}{6k})$ .
By Theorem 4.2, we have $\text{cost}_k(F_1, D) \leq C(\log n)(6OPT(D) + 6k\triangle\log n/\epsilon)$ for some constant $C > 0$ . Let $T = 6Ck\log\log n$ . We have that + +$$ +\begin{array}{l} E[\operatorname{cost}(F_i, D)] \leq (6OPT(D) + 6k\epsilon^{-1}\triangle \log n)C(\log n)(1 - 1/6k)^{6Ck\log\log n} \\ \leq 6OPT(D) + 6k\epsilon^{-1}\triangle \log n \leq 6OPT(D) + \frac{24k\log n}{\epsilon'}. \end{array} +$$ + +Therefore, with probability at least $\left(1 - T / n^{2}\right)$ , there exists an $i \leq T$ s.t. $cost(F_{i},D) \leq 6OPT(D) + \frac{24k\log n}{\epsilon'}$ . Then, by Lemma B.6, one will pick an $F_{j}$ whose cost exceeds $\min \{cost(F_j,D), j = 1,2,\dots,T\}$ by an additive error of at most $4\ln n / \epsilon'$ , with probability $1 - 1 / n^2$ . Consequently, we know that the expected additive error is + +$$ +24k\triangle \log n / \epsilon' + 4\log n / \epsilon' = O\left(\epsilon^{-1} k^2 \triangle (\log\log n) \log n\right), +$$ + +with probability $1 - 1/\mathrm{poly}(n)$ . + +# C Extend HST Initialization to $k$ -Means + +Naturally, our HST method can also be applied to the $k$ -means clustering problem. In this section, we extend the HST to $k$ -means and provide some brief analysis similar to that for $k$ -median. We present the analysis in the non-private case, which can then be easily adapted to the private case. Define the following costs for $k$ -means.
+ +$$ +\operatorname{cost}_{km}^T(U) = \sum_{y \in U} \min_{x \in C_0} \rho^T(x, y)^2, \tag{12} +$$ + +$$ +\operatorname{cost}_{km}^{T'}(U, C_1) = \min_{|F \cap T(v)| = 1, \forall v \in C_1} \sum_{y \in U} \min_{x \in F} \rho^T(x, y)^2, \tag{13} +$$ + +$$ +OPT_{km}^T(U) = \min_{F \subset U, |F| = k} \sum_{y \in U} \min_{x \in F} \rho^T(x, y)^2 \equiv \min_{C_1'} \operatorname{cost}_{km}^{T'}(U, C_1'). \tag{14} +$$ + +For simplicity, we will use $cost_{km}^{T'}(U)$ to denote $cost_{km}^{T'}(U, C_{1})$ if everything is clear from context. Here, $OPT_{km}^{T}$ (14) is the cost of the global optimal solution with 2-HST metric. + +Lemma C.1 (Subtree search). $cost_{km}^{T'}(U) \leq 17OPT_{km}^{T}(U)$ . + +Proof. The analysis is similar to the proof of Lemma 3.5; thus, we mainly highlight the difference, and reuse the notation of Lemma 3.5. Let us consider the clustering with center set $C^* = \{c_1^*, c_2^*, \ldots, c_k^*\}$ (each center $c_j^*$ is a leaf of the subtree whose root is $c_j'$ ), and let $S_j'$ be the leaves assigned to $c_j^*$ in the optimal $k$ -means clustering in the tree metric. Let $S_j$ denote the set of leaves in $S_j'$ whose distance to $c_j^*$ is strictly smaller than its distance to any centers in $C_1$ . Let $P_j$ denote the union of paths between leaves of $S_j$ to its closest center in $C_1$ . Let $v_j''$ be the nodes in $P_j$ with highest level satisfying $T(v_j'') \cap C_1 = \emptyset$ . The score of $v_j''$ is $2^{h_{v_j''}} N(v_j'')$ . That means the swap of a center $v_j'$ into $C_1$ can reduce $cost_{km}^{T'}(U)$ by at most $(4 \cdot 2^{h_{v_j''}})^2 N(v_j'')$ . We just use $v_j'$ to represent $v_j''$ for the rest of this proof for simplicity.
By our reasoning, summing all the swaps over $C_1^* \setminus C_1$ gives + +$$ +\operatorname{cost}_{km}^{T'}(U) - OPT_{km}^T(U) \leq \sum_{v_j' \in C_1^* \backslash C_1} N_{v_j'} \cdot (4 \cdot 2^{h_{v_j'}})^2, +$$ + +$$ +OPT_{km}^T(U) \geq \sum_{v_i \in C_1 \backslash C_1^*} N_{v_i}\left(2^{h_{v_i}}\right)^2. +$$ + +Also, based on our discussion on Case 1, it holds that + +$$ +N_{v_j'} 2^{h_{v_j'}} - N_{v_i} 2^{h_{v_i}} \leq 0. +$$ + +Summing them together, we have $\text{cost}_{km}^{T'}(U) \leq 17\text{OPT}_{km}^T(U)$ . $\square$ + +Next, we show that the greedy leaf search strategy (Algorithm 5) only leads to an extra multiplicative error of 2. + +Lemma C.2 (Leaf search). $cost_{km}^{T}(U) \leq 2cost_{km}^{T'}(U)$ . + +Proof. Since the subtrees in $C_1$ are disjoint, it suffices to consider one subtree with root $v$ . With a little abuse of notation, let $cost_1^{T'}(v, U)$ denote the optimal $k$ -means cost within the point set $T(v)$ with one center in 2-HST: + +$$ +\operatorname{cost}_1^{T'}(v, U) = \min_{x \in T(v)} \sum_{y \in T(v)} \rho^T(x, y)^2, \tag{15} +$$ + +which is the optimal cost within the subtree. Suppose $v$ has more than one child $u, w, \ldots$ ; otherwise the optimal center is clear. Suppose the optimal solution of $cost_1^{T'}(v, U)$ chooses a leaf node in $T(u)$ , and our HST initialization algorithm picks a leaf of $T(w)$ . If $u = w$ , then HST chooses the optimal one and the argument holds trivially. Thus, we consider $u \neq w$ .
We have the following two observations: + +- Since one needs to pick a leaf of $T(u)$ to minimize $cost_1^{T'}(v, U)$ , we have $cost_1^{T'}(v, U) \geq \sum_{x \in ch(v), x \neq u} N_x \cdot (2^{h_x})^2$ where $ch(v)$ denotes the children nodes of $v$ . +- By our greedy strategy, $\text{cost}_1^T(v, U) \leq \sum_{x \in \text{ch}(u)} N_x \cdot (2^{h_x})^2 \leq \text{cost}_1^{T'}(v, U) + N_u \cdot (2^{h_u})^2$ . + +Since $h_u = h_w$ , we have + +$$ +2^{h_u} \cdot (N_u - N_w) \leq 0, +$$ + +since our algorithm picks subtree roots with highest scores. Then we have $cost_1^T(v, U) \leq cost_1^{T'}(v, U) + N_w \cdot (2^{h_w})^2 \leq 2cost_1^{T'}(v, U)$ . Since the subtrees in $C_1$ are disjoint, the union of centers for $OPT_1^T(v, U)$ , $v \in C_1$ forms the optimal centers with size $k$ . Note that, for any data point $p \in U \setminus C_1$ , the tree distance $\rho^T(p, f)$ for $\forall f$ that is a leaf node of $T(v)$ , $v \in C_1$ is the same. That is, the choice of leaf in $T(v)$ as the center does not affect the $k$ -means cost under the 2-HST metric. Therefore, a union bound over the $k$ subtree costs completes the proof. + +We are ready to state the error bound for our proposed HST initialization (Algorithm 4), which is a natural combination of Lemma C.1 and Lemma C.2. + +Theorem C.3 (HST initialization). $cost_{km}^{T}(U) \leq 34OPT_{km}^{T}(U)$ . + +We have the following result based on Lemma 3.4. + +Theorem C.4. In a general metric space, + +$$ +E\left[\operatorname{cost}_{km}(U)\right] = O\left(\min\left\{\log n, \log \bigtriangleup\right\}\right)^2 OPT_{km}(U).
+$$ \ No newline at end of file diff --git a/kmedianclusteringviametricembeddingtowardsbetterinitializationwithdifferentialprivacy/images.zip b/kmedianclusteringviametricembeddingtowardsbetterinitializationwithdifferentialprivacy/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..98e38192f815e44095147c6efbc7b2529af7a841 --- /dev/null +++ b/kmedianclusteringviametricembeddingtowardsbetterinitializationwithdifferentialprivacy/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:dd9ba5484eaa3daa0ecb52190644b756b9fe01dbcf33b0137b8d6cb69cf73a11 +size 1069253 diff --git a/kmedianclusteringviametricembeddingtowardsbetterinitializationwithdifferentialprivacy/layout.json b/kmedianclusteringviametricembeddingtowardsbetterinitializationwithdifferentialprivacy/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..bd0b13264133551601bc1adb59678b65a3359f12 --- /dev/null +++ b/kmedianclusteringviametricembeddingtowardsbetterinitializationwithdifferentialprivacy/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5870135c672a1faf0d9223ef353fe123d63625c39a784058fef4229079668c2a +size 1227958 diff --git a/rppgtoolboxdeepremoteppgtoolbox/ccf1513f-6339-48d5-896d-284ac3142418_content_list.json b/rppgtoolboxdeepremoteppgtoolbox/ccf1513f-6339-48d5-896d-284ac3142418_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..be5f87c5eeb0daa516cfc0b0f40ab1819a0f04e4 --- /dev/null +++ b/rppgtoolboxdeepremoteppgtoolbox/ccf1513f-6339-48d5-896d-284ac3142418_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ad0b410e8bcb102d8e95a88f7d772a1b017837e6b761909b519573f2e24cc2e3 +size 147565 diff --git a/rppgtoolboxdeepremoteppgtoolbox/ccf1513f-6339-48d5-896d-284ac3142418_model.json b/rppgtoolboxdeepremoteppgtoolbox/ccf1513f-6339-48d5-896d-284ac3142418_model.json new file mode 100644 index 
# rPPG-Toolbox: Deep Remote PPG Toolbox

Xin Liu1, Girish Narayanswamy1*, Akshay Paruchuri2*, Xiaoyu Zhang3, Jiankai Tang3, Yuzhe Zhang3, Roni Sengupta2, Shwetak Patel1, Yuntao Wang3, Daniel McDuff1

1University of Washington, 2University of North Carolina at Chapel Hill, 3Tsinghua University

{xliu0, girishvn, dmcduff}@cs.washington.edu, akshay@cs.unc.edu

* Equal Contribution

# Abstract

Camera-based physiological measurement is a fast-growing field of computer vision. Remote photoplethysmography (rPPG) utilizes imaging devices (e.g., cameras) to measure the peripheral blood volume pulse (BVP), and enables cardiac measurement via webcams and smartphones. However, the task is non-trivial, with important pre-processing, modeling, and post-processing steps required to obtain state-of-the-art results.
Replication of results and benchmarking of new models is critical for scientific progress; however, as with many other applications of deep learning, reliable codebases are not easy to find or use. We present a comprehensive toolbox, rPPG-Toolbox, that contains unsupervised and supervised rPPG models with support for public benchmark datasets, data augmentation, and systematic evaluation: https://github.com/ubicomplab/rPPG-Toolbox

# 1 Introduction

The vision of ubiquitous computing is to embed computation into everyday objects to enable them to perform useful tasks. The sensing of physiological vital signs is one such task and plays an important role in how health is understood and managed. Cameras are both ubiquitous and versatile sensors, and the transformation of cameras into accurate health sensors has the potential to make the measurement of health signals more comfortable and accessible. Examples of the applications of this technology include systems for monitoring neonates [1], dialysis patients [2], and the detection of arrhythmias [3].

Building on advances in computer vision, camera-based measurement of physiological vital signs has developed into a research field of its own [? ]. Researchers have developed methods for measuring cardiac and pulmonary signals by analyzing skin pixel changes over time. Recently, several companies have been granted FDA De Novo status for products that use software algorithms to analyze video and estimate pulse rate, heart rate, respiratory rate and/or breathing rate $^{1,2}$ .

There are hundreds of computational architectures that have been proposed for the measurement of cardiopulmonary signals. Unsupervised signal processing methods leverage techniques such as Independent Component Analysis (ICA) or Principal Component Analysis (PCA) and assumptions about the periodicity or structure of the underlying blood volume pulse waveform.
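To make this concrete, here is a toy, self-contained sketch of the simplest member of this family: spatially average the green channel, keep only frequencies in a plausible heart-rate band, and read off the dominant frequency. This is an illustration of the general idea, not any toolbox's implementation.

```python
import numpy as np

def green_channel_pulse(frames, fs=30.0, lo=0.75, hi=2.5):
    """Toy rPPG estimate: spatially average the green channel of each RGB
    frame, then zero out frequencies outside the heart-rate band."""
    # frames: (N, H, W, 3) RGB video as a float or uint8 array
    g = frames[..., 1].reshape(len(frames), -1).mean(axis=1)  # (N,)
    g = g - g.mean()
    spec = np.fft.rfft(g)
    freqs = np.fft.rfftfreq(len(g), d=1.0 / fs)
    spec[(freqs < lo) | (freqs > hi)] = 0.0  # crude band-pass filter
    return np.fft.irfft(spec, n=len(g))

def estimate_hr_bpm(signal, fs=30.0):
    """Heart rate (beats/min) from the dominant in-band frequency."""
    spec = np.abs(np.fft.rfft(signal - signal.mean()))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    band = (freqs >= 0.75) & (freqs <= 2.5)
    return 60.0 * freqs[band][np.argmax(spec[band])]
```

Real methods such as ICA, CHROM, and POS replace the single green channel with learned or fixed projections of all three color channels, precisely because the green channel alone is fragile under motion and lighting changes.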
Neural network architectures can be trained in a supervised fashion using videos with synchronized gold-standard ground truth signals [4, 5, 6, 7]. Innovative data generation [8] and augmentation [9], meta-learning for personalization [10, 11], federated learning [12], and unsupervised pretraining [13, 14, 15, 16] have been widely explored in the field of camera-based physiological sensing and have led + +![](images/0767bffe9287132cd600781a6ac5a64456ccc5c9bc4be02768bb5ebb1050942f.jpg) +Figure 1: rPPG Pipeline. An example of the components of an rPPG pipeline including preprocessing, training, inference, and evaluation. + +![](images/228198d2e42ac26feeac3d3e644bdd4f868f54086a68e7fec32a3324eeb39de0.jpg) + +to significant improvements in state-of-the-art performance. Further information regarding the background, algorithms, and potential applications of rPPG are included in the Appendix-B and C. + +However, standardization in the field is still severely lacking. Based on our review of literature in the space, we identified four issues that have hindered the interpretation of results in many papers. First, and perhaps most obviously, a number of the published works are not accompanied by public code. While publishing code repositories with papers is now fairly common in the machine learning and computer vision research communities, it is far less common in the field of camera-based physiological sensing. While there are reasons that it might be difficult to release datasets (e.g., medical data privacy), we cannot find good arguments for not releasing code. Second, many papers do not compare to previously published methods in an "apples-to-apples" fashion. 
This point is a little more subtle, but rather than performing systematic side-by-side comparisons between methods, the papers compare to numerical results from previous work, even if the training sets and/or test sets are not identical (e.g., test samples were filtered because they were deemed not to have reliable labels). Unfortunately, this often makes it unclear if performance differences are due to data, preprocessing steps, model design, post-processing, training schemes and hardware specifications, or a combination of the aforementioned. Continuing this thread, the third flaw is that papers use pre- and post-processing steps that are not adequately described. Finally, different researchers compute the "labels" (e.g., heart rate) using their own methods from the contact PPG or ECG time-series data. Differences in these methods lead to different labels and a fundamental issue when it comes to benchmarking performance. When combined, the aforementioned issues make it very difficult to draw conclusions from the literature about the optimal choices for the design of rPPG systems.

Open-source code allows researchers to compare novel approaches to consistent baselines without ambiguity regarding the implementation or parameters used. This transparency is important as subsequent research invariably builds on prior state-of-the-art. Implementing a prior method from a paper, even if clearly written, can be difficult. Furthermore, it is an inefficient use of time for many researchers to re-implement all baseline methods. In an effort to address this, several open-source toolboxes have been released for camera-based physiological sensing. These toolboxes have been a significant contribution to the community and provide implementations of methods and models [17, 18, 19]; however, they are also incomplete.
McDuff and Blackford $[17]^{3}$ implemented a set of source separation methods (Green, ICA, CHROM, POS) and Pilz [19] published the PPGI-Toolbox $^{4}$ containing implementations of Green, SSR, POS, Local Group Invariance (LGI), Diffusion Process (DP) and Riemannian-PPGI (SPH) models. These toolboxes are implemented in MATLAB (e.g., [17]) and do not contain examples of supervised methods. Python and supervised neural models are now the focus of a large majority of computer vision and deep learning research. There are + +Table 1: Comparison of rPPG Toolboxes. Comparison of rPPG-Toolbox with existing toolboxes in camera-based physiological measurement. + +
| Toolbox | Dataset Support | Unsup. Eval | DNN Training | DNN Eval |
| --- | --- | --- | --- | --- |
| iPhys-Toolbox [20] | ✗ | ✓ | ✗ | ✗ |
| PPG-I Toolbox [19] | ✗ | ✓ | ✗ | ✗ |
| pyVHR [18, 21] | ✓ | ✓ | ✗ | ✓ |
| rPPG-Toolbox (Ours) | ✓ | ✓ | ✓ | ✓ |
Unsup. $=$ Unsupervised learning methods, DNN $=$ Deep neural network methods.

several implementations of popular signal processing methods in Python: bob.rppg.base$^5$ includes implementations of CHROM and SSR, and Boccignone et al. [18] released code for Green, CHROM, ICA, LGI, PBV, PCA, and POS. Several published papers have included links to code; however, often this is only inference code and not training code for neural models. Without training code for neural networks, it is challenging for researchers to conduct end-to-end reproducible experiments and build on existing research.

In this paper, we present an end-to-end toolbox for camera-based physiological measurement. This toolbox includes: 1) support for six public datasets, 2) pre-processing code to format the datasets for training neural models, 3) implementations of six neural model architectures and six unsupervised learning methods, 4) evaluation and inference pipelines for supervised and unsupervised learning methods for reproducibility, and 5) advanced neural training and inference features such as weakly supervised pseudo labels, motion augmentation, and multitask learning. We use this toolbox to publish clear and reproducible benchmarks that we hope will provide a foundation for the community to compare methods in a more rigorous and informative manner.

# 2 Related Work

In the field of remote PPG sensing, there are three significant open-source toolboxes (documented in Table 1):

iPhys-Toolbox [17]: An open-source toolbox written in MATLAB that comprises implementations of numerous algorithms for rPPG sensing. It empowers researchers to present results on their datasets using public, standard implementations of baseline methods, ensuring transparency of parameters. This toolbox incorporates a wide range of widely employed baseline methods; however, it falls short on Python support, public dataset data loaders, and neural network training and evaluation.
PPG-I Toolbox [19]: This toolbox provides MATLAB implementations, specifically for six unsupervised signal separation models. It incorporates four evaluation metrics, including Pearson correlation, RMSE/MSE, SNR, and Bland-Altman plots. However, similar to the iPhys-Toolbox, it lacks support for public dataset data loading and neural network training and evaluation.

pyVHR [21]: The most recent in the field, this toolbox adopts Python instead of MATLAB. While it offers ample support for numerous unsupervised methods, its capabilities are limited when it comes to modern neural networks. Notably, pyVHR supports only two neural networks for inference, and none for model training. This omission can be a roadblock for researchers aiming to reproduce and further advance state-of-the-art neural methods.

# 3 The rPPG-Toolbox

To address the gaps in the current tooling and to promote reproducibility and clearer benchmarking within the camera-based physiological measurement (rPPG) community, we present an open-source toolbox designed to support six public datasets, six unsupervised methods, and six neural methods for data preprocessing, neural model training and evaluation, and further analysis.

# 3.1 Datasets

The toolbox includes pre-processing code that converts six public datasets into a form amenable for training with neural models. The standard form for the videos we select includes raw frames and difference frames (the difference between each pair of consecutive frames) stored as numpy arrays in a [N, W, H, C] format, where N is the length of the sequence, W is the width of the frames, H is the height of the frames, and C is the number of channels. There are six channels in this case, as both the raw frames and difference frames account for three color channels each.

![](images/2d1a7ad710308d9a8edb38fe253c5ea11738261c40e48f4f0f9305a9afa6ceb7.jpg)
Figure 2: Overview. An overview of the rPPG-Toolbox codebase.
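The format described above can be sketched roughly as follows (a hypothetical illustration of the described layout, not the toolbox's actual pre-processing code; `chunk_len=180` is an illustrative default):

```python
import numpy as np

def preprocess_clip(frames, chunk_len=180):
    """Stack raw frames with difference frames into 6 channels, then split
    the video into non-overlapping fixed-length chunks.

    frames: float array of shape (N, W, H, 3).
    Returns an array of shape (num_chunks, chunk_len, W, H, 6).
    """
    frames = frames.astype(np.float32)
    diff = np.zeros_like(frames)
    diff[1:] = frames[1:] - frames[:-1]              # difference frames
    both = np.concatenate([frames, diff], axis=-1)   # (N, W, H, 6)
    n_chunks = len(both) // chunk_len                # drop the remainder
    return both[: n_chunks * chunk_len].reshape(
        n_chunks, chunk_len, *both.shape[1:]
    )
```

Storing fixed-length chunks rather than whole videos keeps array shapes uniform across subjects, which is what makes batched loading straightforward.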
For faster data loading, all videos in the datasets are typically broken up into several "chunks" of non-overlapping N (e.g., 180) frame sequences. All of these parameters (N, W, H, C) are easy to change and customize. The PPG waveform labels are stored as numpy arrays in a [N, 1] format. The entire pre-processing procedure is supported with multi-thread processing to accelerate the data processing time.

We have provided pre-processing code for UBFC-rPPG [22], PURE [23], SCAMPS [24], MMPD [25], BP4D+ [26], and UBFC-Phys [27]. Each of these datasets encompasses a diverse array of real-world conditions, capturing variations in factors such as motion, lighting, skin tones/types, and backgrounds, thus presenting robust challenges for any signal processing and machine learning algorithm. Tools (Python notebooks) are provided for quickly visualizing pre-processed datasets and will be detailed further in Appendix-J. We also support the pre-processing and usage of augmented versions of the UBFC-rPPG [22] and PURE [23] datasets, a feature which we describe further in Section 4.2.

UBFC-rPPG [22]: This dataset features RGB videos recorded using a Logitech C920 HD Pro webcam at $30\mathrm{Hz}$ . The videos have a resolution of $640\times 480$ , and they are stored in an uncompressed 8-bit RGB format. Reference PPG data was obtained using a CMS50E transmissive pulse oximeter, thereby providing the gold-standard validation data. The subjects were positioned approximately one meter away from the camera during the recording sessions. The videos were captured under indoor conditions with a combination of natural sunlight and artificial illumination.

PURE [23]: This dataset consists of recordings from 10 subjects, including 8 males and 2 females. The video footage was captured with an RGB eco274CVGE camera from SVS-Vistek GmbH, with a frequency of $30\mathrm{Hz}$ and a resolution of $640\times 480$ .
Subjects were positioned approximately 1.1 meters from the camera and were illuminated from the front by ambient natural light filtering through a window. The gold-standard ground truth of PPG and SpO2 were obtained at $60\mathrm{Hz}$ with a CMS50E pulse oximeter affixed to the subject's finger. Each participant completed six recordings under varied motion conditions, thereby contributing to a range of data reflecting different physical states. + +SCAMPS [24]: This dataset encompasses 2,800 video clips, comprising 1.68M frames, featuring synthetic avatars in alignment with cardiac and respiratory signals. These waveforms and videos were generated by employing a sophisticated facial processing pipeline, resulting in high-fidelity, quasi-photorealistic renderings. To provide robust test conditions, the videos incorporate various confounders such as head motions, facial expressions, and changes in ambient illumination. + +MMPD [25]: This dataset includes 660 one-minute videos recorded using a Samsung Galaxy S22 Ultra mobile phone, at 30 frames per second with a resolution of $1280 \times 720$ pixels and then compressed to $320 \times 240$ pixels. The ground truth PPG signals were simultaneously captured using an HKG-07C+ oximeter, at $200 \mathrm{~Hz}$ and then downsampled to $30 \mathrm{~Hz}$ . It contains Fitzpatrick skin types 3-6, four different lighting conditions (LED-low, LED-high, incandescent, natural), four distinct + +activities (stationary, head rotation, talking, and walking), and exercise scenarios. With multiple labels provided, different subsets of this dataset can be easily used for research using our toolbox. + +BP4D+ [26]: This dataset contains video footage captured at a rate of 25 frames per second, for 140 subjects, each participating in 10 emotion-inducing tasks, amounting to a total of 1400 trials and associated videos. 
In addition to the standard video footage, the dataset also includes 3D mesh models and thermal video, both captured at the same frame rate. Alongside these, the dataset offers supplementary data including blood pressure measurements (wave, systolic, diastolic, mean), heart rate in beats per minute, respiration (wave, rate bpm), electrodermal activity, and Facial Action Coding System (FACS) encodings for specified action units.

UBFC-Phys [27]: The UBFC-PHYS dataset, a multi-modal dataset, contains 168 RGB videos, with 56 subjects (46 women and 10 men) per task. There are three tasks with significant amounts of unconstrained motion under static lighting conditions - a rest task, a speech task, and an arithmetic task. The dataset contains gold-standard blood volume pulse (BVP) and electrodermal activity (EDA) measurements that were collected via the Empatica E4 wristband. The videos were recorded at a resolution of $1024 \times 1024$ and $35 \mathrm{~Hz}$ with an EO-23121C RGB digital camera. We utilized all three tasks and the same subject sub-selection list provided by the authors of the dataset in the second supplementary material of Sabour et al. [27] for evaluation. We reiterate this subject sub-selection list in Appendix-H.

# 3.2 Methods

# 3.2.1 Unsupervised Methods

The following methods all use linear algebra and traditional signal processing to recover the estimated PPG signal: 1) Green [28]: the green channel information is used as the proxy for the PPG after spatial averaging of the RGB video; 2) ICA [29]: Independent Component Analysis (ICA) is applied to normalized, spatially averaged color signals to recover demixing matrices; 3) CHROM [30]: a linear combination of the chrominance signals obtained from the RGB video is used for estimation; 4) POS [31]: plane-orthogonal-to-the-skin (POS) is a method that calculates a projection plane orthogonal to the skin tone based on physiological and optical principles.
A fixed matrix projection is applied to the spatially normalized, averaged pixel values, which are used to recover the PPG waveform; 5) PBV [32]: a signature determined by a given light spectrum and changes of the blood volume pulse is used to derive the PPG waveform while offsetting motion and other noise in RGB videos; 6) LGI [33]: a feature representation method that is invariant to motion through differentiable local transformations.

# 3.2.2 Supervised Neural Methods

The following implementations of supervised learning algorithms are included in the toolbox. All implementations were done using PyTorch [37]. Common optimization algorithms, such as Adam [38] and AdamW [39], and criteria, such as mean squared error (MSE) loss, are utilized for training except where noted. The learning rate scheduler typically follows the 1cycle policy [40], which anneals the learning rate from an initial value up to a maximum, and then down to a final value much lower than the initial learning rate. The total steps in this policy are determined by the number of epochs multiplied by the number of training batches in an epoch. The 1cycle policy aids convergence because, after numerous epochs at learning rates much higher than the final one, the learning rate is annealed well below its maximum by the end of the cycle. We found the 1cycle learning rate scheduler to provide stable results with convergence using a maximum learning rate of 0.009 and 30 epochs. We provide parameters in the toolbox that can enable the visualization of the losses and learning rate changes for both the training and validation phases. Further details on these key visualizations for supervised neural methods are provided in the GitHub repository.

DeepPhys [4]: A two-branch 2D convolutional attention network architecture.
The two representations (appearance and difference frames) are processed by parallel branches with the appearance branch guiding the motion branch via a gated attention mechanism. The target signal is the first differential of the PPG waveform. + +PhysNet [5]: A 3D convolutional network architecture. Yu et al. compared this 3D-CNN architecture with a 2D-CNN + RNN architecture, finding that a 3D-CNN version was able to achieve superior + +Table 2: Benchmark Results. Performance on the UBFC-rPPG [22], PURE [23] UBFC-Phys [27] and MMPD [25] datasets generated using the rPPG toolbox. For the supervised methods we show cross-dataset training results using the UBFC-rPPG, PURE and SCAMPS datasets. + +
| Method | Training Set | PURE [23] MAE↓ | PURE [23] MAPE↓ | UBFC-rPPG [22] MAE↓ | UBFC-rPPG [22] MAPE↓ | UBFC-Phys [27] MAE↓ | UBFC-Phys [27] MAPE↓ | MMPD [25] MAE↓ | MMPD [25] MAPE↓ |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| **Unsupervised** | | | | | | | | | |
| GREEN [28] | N/A | 10.09 | 10.28 | 19.81 | 18.78 | 13.55 | 16.01 | 21.68 | 24.39 |
| ICA [29] | N/A | 4.77 | 4.47 | 14.70 | 14.34 | 10.03 | 11.85 | 18.60 | 20.88 |
| CHROM [30] | N/A | 5.77 | 11.52 | 3.98 | 3.78 | 4.49 | 6.00 | 13.66 | 15.99 |
| LGI [33] | N/A | 4.61 | 4.96 | 15.80 | 14.70 | 6.27 | 7.83 | 17.08 | 18.98 |
| PBV [32] | N/A | 3.91 | 4.82 | 15.90 | 15.17 | 12.34 | 14.63 | 17.95 | 20.18 |
| POS [31] | N/A | 3.67 | 7.25 | 4.00 | 3.86 | 4.51 | 6.12 | 12.36 | 14.43 |
| **Supervised** | | | | | | | | | |
| TS-CAN [6] | UBFC-rPPG | 3.69 | 3.38 | N/A | N/A | 5.13 | 6.53 | 14.00 | 15.47 |
| TS-CAN [6] | PURE | N/A | N/A | 1.29 | 1.50 | 5.72 | 7.34 | 13.93 | 15.14 |
| TS-CAN [6] | SCAMPS | 4.66 | 5.83 | 3.62 | 3.53 | 5.55 | 6.91 | 19.05 | 21.77 |
| PhysNet [5] | UBFC-rPPG | 8.06 | 13.67 | N/A | N/A | 5.79 | 7.69 | 9.47 | 11.11 |
| PhysNet [5] | PURE | N/A | N/A | 0.98 | 1.12 | 4.78 | 6.15 | 13.93 | 15.61 |
| PhysNet [5] | SCAMPS | 13.30 | 20.01 | 5.40 | 5.43 | 8.53 | 11.22 | 20.78 | 24.43 |
| PhysFormer [34] | UBFC-rPPG | 12.92 | 23.92 | N/A | N/A | 6.63 | 8.91 | 12.1 | 15.41 |
| PhysFormer [34] | PURE | N/A | N/A | 1.44 | 1.66 | 6.04 | 7.67 | 14.57 | 16.73 |
| PhysFormer [34] | SCAMPS | 26.58 | 42.79 | 4.56 | 5.18 | 11.91 | 15.57 | 22.69 | 27.06 |
| DeepPhys [35] | UBFC-rPPG | 5.54 | 5.32 | N/A | N/A | 6.62 | 8.21 | 17.49 | 19.26 |
| DeepPhys [35] | PURE | N/A | N/A | 1.21 | 1.42 | 8.42 | 10.18 | 16.92 | 18.54 |
| DeepPhys [35] | SCAMPS | 3.95 | 4.25 | 3.10 | 3.08 | 4.75 | 5.89 | 15.22 | 16.56 |
| EfficientPhys-C [36] | UBFC-rPPG | 5.47 | 5.39 | N/A | N/A | 4.93 | 6.25 | 13.78 | 15.15 |
| EfficientPhys-C [36] | PURE | N/A | N/A | 2.07 | 2.10 | 5.31 | 6.61 | 14.03 | 15.31 |
| EfficientPhys-C [36] | SCAMPS | 10.24 | 11.70 | 12.64 | 11.26 | 6.97 | 8.47 | 20.41 | 23.52 |
+ +MAE = Mean Absolute Error in HR estimation (Beats/Min), MAPE = Mean Percentage Error (%). + +performance. Therefore, we included the 3D-CNN in this case. Instead of an MSE loss, a negative Pearson loss is utilized. It is worth noting that we used difference-normalized frames as input to PhysNet as the original paper does not specify a concrete input data format. Additional experiments involving raw frame inputs are included in Appendix-D. + +PhysFormer [34]: PhysFormer is a video transformer-based architecture that adaptively aggregates both local and global spatio-temporal features toward rPPG representation enhancement. The architecture ultimately incorporated and emphasized long-term, global features and allowed for notable improvements in performance relative to various methods, including POS [31], DeepPhys [4], and PhysNet [5]. Instead of an MSE loss, a dynamic loss composed of numerous hyperparameters, a negative Pearson loss, a frequency cross-entropy loss, and a label distribution loss is used. Furthermore, the chosen learning rate scheduler differs from our toolbox's default learning rate scheduler (1cycle policy [40]) in that we effectively opt to use a single, constant learning rate for training. We utilize difference-normalized frames as an input to PhysFormer. + +TS-CAN [6]: A two-branch 2D convolutional attention network architecture that leverages temporal shift operation information across the time axis to perform efficient temporal and spatial modeling. This network is an on-device, real-time algorithm. The target signal is the first differential of the PPG waveform. + +EfficientPhys-C [36]: A single-branch 2D convolutional neural network that aims to provide an end-to-end, super lightweight network for real-time on-device computation. The architecture has a normalization module that calculates frame differences and learnable normalization, as well as a self-attention module to help the network focus on skin pixels associated with PPG signal. 
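Several of the supervised methods above replace MSE with a negative Pearson loss on the predicted waveform. A minimal NumPy sketch of that objective for a single sequence (a simplified illustration, not the toolbox's PyTorch implementation) is:

```python
import numpy as np

def neg_pearson_loss(pred, label):
    """Negative Pearson loss, 1 - r, where r is the Pearson correlation
    between predicted and ground-truth waveforms. A perfectly correlated
    prediction gives a loss of 0; an anti-correlated one gives 2."""
    pred = pred - pred.mean()
    label = label - label.mean()
    denom = np.sqrt((pred ** 2).sum()) * np.sqrt((label ** 2).sum()) + 1e-8
    r = (pred * label).sum() / denom
    return 1.0 - r
```

Because the loss is invariant to the scale and offset of the prediction, it rewards recovering the waveform's shape and timing rather than its absolute amplitude, which is what matters for downstream heart-rate estimation.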
+ +# 3.3 Pre-Processing, Training, Post-Processing and Evaluation + +In the rPPG-Toolbox, we offer a configuration file system that enables users to modify all parameters used in pre-processing, training, post-processing, and evaluation. A YAML file is provided for every experiment and includes blocks for pre/post-processing, training, validation, testing, model + +hyperparameters, and computational resources. The pre/post-processing for neural and unsupervised methods share similar settings, such as the same input resolution and face cropping. + +In terms of pre-processing, we provide three input data types: 1) "DiffNormalized", which calculates the difference of every two consecutive frames and labels, and normalizes them by their standard deviation; 2) "Standardized", which standardizes the raw frames and labels using z-score; 3) "Raw", which uses the original frames and labels without modification. Additionally, we provide parameters for face cropping, a vital aspect of our task. In the config file, users can use dynamic detection to perform face cropping every N frames and scale the face bounding box by a coefficient to maintain consistency of face cropping in motion videos. Users can also elect to use a median bounding box with dynamic detection in order to help filter out erroneous detections of the face. + +With regard to the training of neural networks, our toolbox provides flexibility to parameterize which portion of the data is used for training, validation, or testing. For instance, we can use the first $80\%$ of UBFC-rPPG for training, the last $20\%$ of UBFC-rPPG for validation and then use the entire PURE dataset for testing. Moreover, the distinct parameters (e.g., dropout rate) of each neural network can be defined in the config file. + +For post-processing and evaluation, there are several standard post-processing steps that are typically employed to improve model predictions. 
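The fractional splits described above can be expressed with a small helper (a hypothetical sketch, not the toolbox's actual config parser):

```python
def split_by_fraction(items, begin, end):
    """Return the [begin, end) fraction of an ordered dataset, mirroring
    config-style splits (e.g. begin=0.0, end=0.8 for the first 80% of
    UBFC-rPPG for training; begin=0.8, end=1.0 for the last 20%)."""
    n = len(items)
    return items[int(begin * n):int(end * n)]
```

Pointing the test split at a different dataset entirely (e.g., all of PURE with begin=0.0, end=1.0) then reproduces the cross-dataset evaluation setting used in the benchmarks.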
A 2nd-order Butterworth filter (cut-off frequencies of 0.75 and $2.5\mathrm{Hz}$ ) is applied to filter the predicted PPG waveform. The choice of filtering parameters can have a significant impact on downstream results such as heart rate errors. A Fast Fourier Transform or a peak detection algorithm is then applied to the filtered signal to calculate the heart rate. In this toolbox, we support five metrics for video-level heart rate estimations: mean absolute error (MAE), root mean squared error (RMSE), mean absolute percentage error (MAPE), signal-noise ratio (SNR), and Pearson Correlation $(\rho)$ , along with a calculation of standard error for a better understanding of the accuracy of the aforementioned metrics. We also give users the option to visualize Bland-Altman plots as a part of evaluation. Finer details on the supported metrics (F), metric results not reported in the main paper (G), and Bland-Altman plots (J.3) appear in the respective appendices. For better reproducibility, we also provide pre-trained models in our Github repository to allow researchers to perform model inference. The detailed definition of each config parameter is also provided in the GitHub repository. + +# 3.4 Benchmarking + +To show that the implementations of the baseline methods are functioning as expected and to provide benchmark results for consumers of the toolbox to reference and reproduce, we performed a set of baseline experiments using three commonly used video rPPG datasets for training: SCAMPS [24], UBFC-rPPG [22] and PURE [23] and tested on four datasets including UBFC-rPPG [22], PURE [23], UBFC-Phys [27], and MMPD [25]. Except where noted in the GitHub repository, neural models utilized a training batch size of 4, 30 epochs, and an inference batch size of 4 for all experiments. Due to the multi-institution team behind this toolbox, different kinds of GPUs were utilized to produce benchmark results. 
Tables 2 and 4 were produced on a machine using a single NVIDIA RTX A4500 GPU. Tables 3 and 5 were produced on a machine using a single NVIDIA GeForce 2080 Ti GPU. As illustrated in Table 2, we show MAE and MAPE computed between the video-level heart rate estimations and gold-standard measurements. Additional cross-dataset experiment results and metric results can be found in Appendix-G and F.

# 4 Additional Features

# 4.1 Weakly Supervised Training

Supervised rPPG training requires high-fidelity synchronous PPG waveform labels. However, not all datasets contain such high-quality labels. In these cases we offer the option to train on synchronous PPG "pseudo" labels derived through a signal processing methodology as described by [41]. These labels are produced from POS-generated [31] PPG waveforms, which are then bandpass filtered around the normal heart-rate frequencies (cut-off frequencies of 0.70 and $3.0\mathrm{Hz}$ ), and finally amplitude normalized using a Hilbert-signal envelope. The tight filtering and envelope normalization result in a strongly periodic proxy signal, but at the cost of limited signal morphology.

For instance, in the BP4D+ dataset [26], the cardiac ground truth is represented by a blood pressure waveform. Although this waveform exhibits the same periodicity as the PPG signal, it has a phase shift that adversely affects model training. Sample pseudo labels derived for BP4D+ [26] videos, plotted against the ground-truth blood pressure waveform, are shown in Figure 3. Table 3 presents results for supervised methods trained on BP4D+ [26] pseudo labels. We extend this feature to all of the supported datasets.

Table 3: Training with Pseudo Labels. For the supervised methods we show results training with the (entire) BP4D+ [26] dataset, using POS [31] derived pseudo training labels.
Training Set: BP4D+ [26] with POS Pseudo Labels.

| Method | UBFC-rPPG [22] MAE↓ | UBFC-rPPG [22] MAPE↓ | UBFC-rPPG [22] ρ↑ | PURE [23] MAE↓ | PURE [23] MAPE↓ | PURE [23] ρ↑ |
| --- | --- | --- | --- | --- | --- | --- |
| **Supervised** | | | | | | |
| TS-CAN [6] | 4.69 | 4.51 | 0.78 | 1.29 | 1.60 | 0.97 |
| PhysNet (Normalized) [5] | 1.78 | 1.92 | 0.96 | 3.69 | 7.35 | 0.88 |
| DeepPhys [35] | 2.74 | 2.81 | 0.93 | 2.47 | 2.49 | 0.89 |
| EfficientPhys-C [36] | 2.43 | 2.52 | 0.90 | 3.59 | 3.27 | 0.80 |
MAE $=$ Mean Absolute Error in HR estimation (Beats/Min), MAPE $=$ Mean Percentage Error (%), $\rho =$ Pearson Correlation in HR estimation.

# 4.2 Motion Augmented Training

The usage of synthetic data in the training of machine learning models for medical applications is becoming a key tool that warrants further research [42]. In addition to providing support for the fully synthetic dataset SCAMPS [24], we provide support for synthetic, motion-augmented versions of the UBFC-rPPG [22], PURE [23], SCAMPS [24], and UBFC-PHYS [27] datasets for further exploration toward the use of synthetic data for training rPPG models. The synthetic, motion-augmented datasets are generated using an open-source motion augmentation pipeline targeted at increasing motion diversity in rPPG videos [43]. We present cross-dataset results using a motion-augmented version of the UBFC-rPPG [22] dataset in Table 4. We also provide tools that leverage OpenFace [44] for extracting, visualizing, and analyzing motion in rPPG video datasets. Further details regarding these tools are shared in our GitHub repository.

Table 4: Training with Motion-Augmented Data. We demonstrate results training on a motion-augmented (MA) version of the UBFC-rPPG [22] dataset generated using an open-source motion augmentation pipeline [43] and testing on the unaugmented version of the PURE [23] dataset.
| Method | PURE [23] MAE↓ | PURE [23] MAPE↓ | PURE [23] ρ↑ | MA-UBFC-rPPG [22] MAE↓ | MA-UBFC-rPPG [22] MAPE↓ | MA-UBFC-rPPG [22] ρ↑ | MMPD [25] MAE↓ | MMPD [25] MAPE↓ | MMPD [25] ρ↑ |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| TS-CAN [6] | 1.07 | 1.20 | 0.97 | 5.03 | 6.36 | 0.75 | 12.59 | 13.77 | 0.23 |
| PhysNet (Normalized) [5] | 17.03 | 32.37 | 0.38 | 5.51 | 7.50 | 0.68 | 10.67 | 13.99 | 0.33 |
| DeepPhys [35] | 1.15 | 1.40 | 0.97 | 4.95 | 6.26 | 0.75 | 12.71 | 13.70 | 0.21 |
| EfficientPhys-C [36] | 2.59 | 2.67 | 0.88 | 4.80 | 6.10 | 0.79 | 13.39 | 14.50 | 0.14 |
MAE $=$ Mean Absolute Error in HR estimation (Beats/Min), MAPE $=$ Mean Percentage Error (%), $\rho =$ Pearson Correlation in HR estimation.

# 4.3 Extending the rPPG-Toolbox for Physiological Multitasking

While this toolbox is primarily targeted towards rPPG model training and evaluation, it can be easily extended to support multi-tasking of physiological signals. As an example, we implement BigSmall [41], an architecture that multi-tasks PPG, respiration, and facial action. Similar to [41], we present 3-fold cross-validation results across the action unit (AU) subset of BP4D+ [26] (the portion of the dataset with AU labels), and use the same subject folds and hyperparameters as implemented in the original publication. These results can be found in Table 5. Note that, like [41], facial action metrics are calculated across 12 common AUs (AU #s 1, 2, 4, 6, 7, 10, 12, 14, 15, 17, 23, 24). Additional details of training and evaluation can be found in Appendix-I.

![](images/ef43d049692003333c4369c094857012b4433b82386a5629250fb28036a263ec.jpg)

![](images/6b703304e0cf97ac485aa803bd8728ecfc5defa837587a0b1239437c89ddc009.jpg)

![](images/dc076b3d356ad6e5ff51bef2dd999b3078a82dfa7204e8ef66031a45487a479c.jpg)
Figure 3: Generated Pseudo Labels. Samples of POS [31] generated PPG pseudo labels plotted against ground truth blood pressure waveforms from BP4D+ [26].

Table 5: Multitasking Results. For the BigSmall [41] method we show results for multi-tasking PPG, respiration, and action unit classification; training with the BP4D+ [26] (AU subset) dataset, using POS [31] derived pseudo training PPG labels.
| Training Set | BP4D+ [26] | | | | | | | | |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Testing Set | BP4D+ [26] | | | | | | | | |
| Task | rPPG | | | Respiration | | | Facial Action | | |
| Method | MAE↓ | MAPE↓ | ρ↑ | MAE↓ | MAPE↓ | ρ↑ | F1↑ | Prec.↑ | Acc.↑ |
| BigSmall [41] | 3.23 | 3.51 | 0.83 | 5.19 | 26.28 | 0.14 | 42.82 | 39.85 | 65.73 |
MAE = Mean Absolute Error in HR estimation (Beats/Min), MAPE = Mean Percentage Error (%), $\rho$ = Pearson Correlation in HR estimation, F1 = average F1 across 12 action units, Prec. = average precision across 12 action units, Acc. = average accuracy across 12 action units.

# 4.4 Training, Evaluation and Analysis Features

In this toolbox, we have incorporated a diverse set of training and evaluation functionalities. These include: 1) data pre-processing visualization tools, enabling users to inspect and understand their data before feeding it into the model; 2) comprehensive tracking and visualization of key training metrics such as training loss, validation loss, and learning rate, facilitating thorough monitoring of the model's learning progress; 3) the implementation of Bland-Altman plots, providing a robust method for assessing agreement between ground-truth and predicted HR; 4) advanced motion analysis capabilities. For an in-depth exploration of these features, we direct the reader to Appendix-J and our GitHub page, where detailed descriptions and usage examples are provided.

# 5 Limitations

We acknowledge there are many limitations in our current toolbox and plan to continue to improve it in the future. In the ensuing phases of our research, we envision a collaborative approach, working in concert with the wider scientific community, to enhance the efficacy and capabilities of the rPPG-Toolbox. The current limitations include: 1) the toolbox does not support all of the latest neural architectures and diverse datasets; 2) it does not support unsupervised and self-supervised learning paradigms; 3) it does not support applications beyond heart rate calculation, such as heart rate variability, blood pressure, $\mathrm{SpO}_2$, and other important physiological measures.

# 6 Broader Impacts

Camera sensing has advantages and benefits, with the potential to make important cardiac measurements more accessible and comfortable.
One of the motivating use-cases for rPPG is turning everyday devices equipped with cameras into scalable health sensors. However, pervasive measurement can also feel intrusive. We are releasing the rPPG toolbox with a Responsible AI License [45] that + +restricts negative and unintended uses of the toolbox. We also acknowledge the presence of several potential negative concerns and impacts, which are described as follows. + +Privacy Concerns: Camera-based physiological sensing offers a revolutionary way to extract physiological signals from video recordings, enabling a myriad of applications, from remote patient monitoring to daily fitness tracking. However, these advancements come with significant privacy concerns. First and foremost, the very nature of remote sensing allows for the collection of personal data without direct physical interaction or, in some cases, knowledge of the individual being monitored. This can potentially enable unauthorized entities to capture sensitive physiological data covertly. Furthermore, as these systems become more widespread, there is a risk that everyday places such as shopping malls, public transport, and even workplaces might employ rPPG systems, leading to widespread passive data collection. Such extensive monitoring can lead to privacy invasions, where individuals are constantly under physiological surveillance without their explicit consent. + +Potential Negative Impact: Beyond individual privacy, there are broader societal implications of widespread contactless camera-based physiological sensing. There's the potential for the creation of a pervasive surveillance state where citizens are continuously monitored, not just for their actions but also for their physiological responses. Such monitoring can lead to "physiological profiling," where individuals are judged, categorized, or even discriminated against based on their bodily responses. 
For instance, elevated heart rates or other physiological markers might be misinterpreted as signs of nervousness, guilt, or deceit, potentially affecting decision-making in areas such as law enforcement, job interviews, or public services. Moreover, a continuous emphasis on physiological metrics might foster an environment of physiological conformism, where people feel pressured to exhibit 'normal' physiological signs even if they aren't feeling well or are under duress.

Potential Ethical Concerns: The ethical implications of rPPG are multi-faceted. Firstly, there is the concern of informed consent. As technology becomes more integrated into our environments, it becomes challenging to ensure that individuals are aware of, and have agreed to, the collection of their physiological data. Moreover, the accuracy and reliability of rPPG systems can vary depending on factors like skin tone, lighting conditions, and other external factors. This introduces the risk of systematic biases, where certain groups might be inaccurately assessed or marginalized due to technological limitations or inherent biases in the algorithms. Ethical concerns also arise from potential misuse. For example, businesses might use rPPG data to gauge consumer reactions to products or advertisements, leading to manipulative strategies that target individual vulnerabilities. In extreme cases, authoritarian regimes might use such technologies to monitor citizens for signs of dissent or unrest. As with any potent tool, the ethical application of rPPG requires careful consideration of its potential for both benevolent and malevolent use.

# 7 Conclusion

Research relies on the sharing of ideas. This not only allows methods to be verified, saving time and resources, but also allows researchers to build more effectively upon existing work. Without such resources and open-sourced code bases, fair evaluation and comparison of methods is difficult, effort is needlessly repeated, and resources are wasted.
We present an end-to-end, comprehensive toolbox, called rPPG-Toolbox, containing code for pre-processing multiple public datasets, implementations of supervised machine learning (including a training pipeline) and unsupervised methods, and post-processing and evaluation tools.

# 8 Acknowledgement

This project is supported by a Google PhD Fellowship for Xin Liu and a research grant from Cisco for the University of Washington, as well as a career start-up funding grant from the Department of Computer Science at UNC Chapel Hill. This research is also supported by the Tsinghua University Initiative Scientific Research Program, the Beijing Natural Science Foundation, and the Natural Science Foundation of China (NSFC).

# References

[1] Lonneke AM Aarts, Vincent Jeanne, John P Cleary, C Lieber, J Stuart Nelson, Sidarto Bambang Oetomo, and Wim Verkruysse. Non-contact heart rate monitoring utilizing camera photoplethysmography in the neonatal intensive care unit—a pilot study. Early human development, 89(12):943–948, 2013.
[2] L Tarassenko, M Villarroel, A Guazzi, J Jorge, DA Clifton, and C Pugh. Non-contact video-based vital sign monitoring using ambient light and auto-regressive models. Physiological measurement, 35(5):807, 2014.
[3] Bryan P Yan, William HS Lai, Christy KY Chan, Alex CK Au, Ben Freedman, Yukkee C Poh, and Ming-Zher Poh. High-throughput, contact-free detection of atrial fibrillation from video with deep learning. JAMA cardiology, 5(1):105-107, 2020.
[4] Weixuan Chen and Daniel McDuff. DeepPhys: Video-Based Physiological Measurement Using Convolutional Attention Networks. arXiv:1805.07888 [cs], August 2018. arXiv: 1805.07888.
[5] Zitong Yu, Wei Peng, Xiaobai Li, Xiaopeng Hong, and Guoying Zhao. Remote heart rate measurement from highly compressed facial videos: an end-to-end deep learning solution with video enhancement. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 151-160, 2019.
+[6] Xin Liu, Josh Fromm, Shwetak Patel, and Daniel McDuff. Multi-task temporal shift attention networks for on-device contactless vitals measurement. Advances in Neural Information Processing Systems, 33:19400-19411, 2020. +[7] Zitong Yu, Xiaobai Li, Pichao Wang, and Guoying Zhao. Transrppg: Remote photoplethysmography transformer for 3d mask face presentation attack detection. IEEE Signal Processing Letters, 2021. +[8] Daniel McDuff, Javier Hernandez, Erroll Wood, Xin Liu, and Tadas Baltrusaitis. Advancing non-contact vital sign measurement using synthetic avatars. arXiv preprint arXiv:2010.12949, 2020. +[9] Zhen Wang, Yunhao Ba, Pradyumna Chari, Oyku Deniz Bozkurt, Gianna Brown, Parth Patwa, Niranjan Vaddi, Laleh Jalilian, and Achuta Kadambi. Synthetic generation of face videos with plethysmograph physiology. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 20587-20596, 2022. +[10] Xin Liu, Yuntao Wang, Sinan Xie, Xiaoyu Zhang, Zixian Ma, Daniel McDuff, and Shwetak Patel. Mobilephys: Personalized mobile camera-based contactless physiological sensing. Proc. ACM Interact. Mob. Wearable Ubiquitous Technol., 6(1), mar 2022. +[11] Xin Liu, Ziheng Jiang, Josh Fromm, Xuhai Xu, Shwetak Patel, and Daniel McDuff. Metaphys: few-shot adaptation for non-contact physiological measurement. In Proceedings of the Conference on Health, Inference, and Learning, pages 154–163, 2021. +[12] Xin Liu, Mingchuan Zhang, Ziheng Jiang, Shwetak Patel, and Daniel McDuff. Federated remote physiological measurement with imperfect data. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 2155-2164, 2022. +[13] Zhaodong Sun and Xiaobai Li. Contrast-phys: Unsupervised video-based remote physiological measurement via spatiotemporal contrast. In European Conference on Computer Vision, pages 492–510. Springer, 2022. +[14] John Gideon and Simon Stent. 
The way to my heart is through contrastive learning: Remote photoplethysmography from unlabelled video. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 3995-4004, 2021. +[15] Jeremy Speth, Nathan Vance, Patrick Flynn, and Adam Czajka. Non-contrastive unsupervised learning of physiological signals from video. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 14464–14474, 2023. + +[16] Yuzhe Yang, Xin Liu, Jiang Wu, Silviu Borac, Dina Katabi, Ming-Zher Poh, and Daniel McDuff. Simper: Simple self-supervised learning of periodic targets. arXiv preprint arXiv:2210.03115, 2022. +[17] Daniel McDuff and Ethan Blackford. iphs: An open non-contact imaging-based physiological measurement toolbox. In 2019 41st Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), pages 6521-6524. IEEE, 2019. +[18] Giuseppe Boccignone, Donatello Conte, Vittorio Cuculo, Alessandro d'Amelio, Giuliano Grossi, and Raffaella Lanzarotti. An open framework for remote-ppg methods and their assessment. IEEE Access, 8:216083-216103, 2020. +[19] Christian Pilz. On the vector space in photoplethysmography imaging. In Proceedings of the IEEE/CVF International Conference on Computer Vision Workshops, pages 0–0, 2019. +[20] Daniel McDuff, Sarah Gontarek, and Rosalind W. Picard. Remote Detection of Photoplethysmographic Systolic and Diastolic Peaks Using a Digital Camera. IEEE Transactions on Biomedical Engineering, 61(12):2948-2954, December 2014. +[21] Giuseppe Boccignone, Donatello Conte, Vittorio Cuculo, Alessandro D'Amelio, Giuliano Grossi, Raffaella Lanzarotti, and Edoardo Mortara. pyvhr: a python framework for remote photoplethysmography. PeerJ Computer Science, 8:e929, 2022. +[22] Serge Bobbia, Richard Macwan, Yannick Benezeth, Alamin Mansouri, and Julien Dubois. Unsupervised skin tissue segmentation for remote photoplethysmography. Pattern Recognition Letters, 124:82-90, 2019. 
+[23] Ronny Stricker, Steffen Müller, and Horst-Michael Gross. Non-contact video-based pulse rate measurement on a mobile service robot. In The 23rd IEEE International Symposium on Robot and Human Interactive Communication, pages 1056-1062. IEEE, 2014. +[24] Daniel McDuff, Miah Wander, Xin Liu, Brian L Hill, Javier Hernandez, Jonathan Lester, and Tadas Baltrusaitis. Scamps: Synthetics for camera measurement of physiological signals. arXiv preprint arXiv:2206.04197, 2022. +[25] Jiankai Tang, Kequan Chen, Yuntao Wang, Yuanchun Shi, Shwetak Patel, Daniel McDuff, and Xin Liu. Mmpd: Multi-domain mobile video physiology dataset, 2023. +[26] Zheng Zhang, Jeff M Girard, Yue Wu, Xing Zhang, Peng Liu, Umur Ciftci, Shaun Canavan, Michael Reale, Andy Horowitz, Huiyuan Yang, et al. Multimodal spontaneous emotion corpus for human behavior analysis. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3438-3446, 2016. +[27] Rita Meziati Sabour, Yannick Benezeth, Pierre De Oliveira, Julien Chappe, and Fan Yang. Ubfc-phys: A multimodal database for psychophysiological studies of social stress. IEEE Transactions on Affective Computing, 2021. +[28] Wim Verkruysse, Lars O Svaasand, and J Stuart Nelson. Remote plethysmographic imaging using ambient light. Optics express, 16(26):21434-21445, 2008. +[29] Ming-Zher Poh, Daniel J McDuff, and Rosalind W Picard. Advancements in noncontact, multiparameter physiological measurements using a webcam. IEEE transactions on biomedical engineering, 58(1):7-11, 2010. +[30] Gerard De Haan and Vincent Jeanne. Robust pulse rate from chrominance-based rppg. IEEE Transactions on Biomedical Engineering, 60(10):2878-2886, 2013. +[31] Wenjin Wang, Albertus C den Brinker, Sander Stuijk, and Gerard de Haan. Algorithmic principles of remote ppg. IEEE Transactions on Biomedical Engineering, 64(7):1479-1491, 2016. +[32] Gerard De Haan and Arno Van Leest. 
Improved motion robustness of remote-ppg by using the blood volume pulse signature. Physiological measurement, 35(9):1913, 2014. + +[33] Christian S Pilz, Sebastian Zaunseder, Jarek Krajewski, and Vladimir Blazek. Local group invariance for heart rate estimation from face videos in the wild. In Proceedings of the IEEE conference on computer vision and pattern recognition workshops, pages 1254–1262, 2018. +[34] Zitong Yu, Yuming Shen, Jingang Shi, Hengshuang Zhao, Philip HS Torr, and Guoying Zhao. Physformer: Facial video-based physiological measurement with temporal difference transformer. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 4186-4196, 2022. +[35] Weixuan Chen and Daniel McDuff. Deepphys: Video-based physiological measurement using convolutional attention networks. In Proceedings of the European Conference on Computer Vision (ECCV), pages 349-365, 2018. +[36] Xin Liu, Brian Hill, Ziheng Jiang, Shwetak Patel, and Daniel McDuff. Efficientphys: Enabling simple, fast and accurate camera-based cardiac measurement. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pages 5008-5017, 2023. +[37] Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, et al. Pytorch: An imperative style, high-performance deep learning library. In Advances in neural information processing systems, pages 8026-8037, 2019. +[38] Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014. +[39] Ilya Loshchilov and Frank Hutter. Decoupled weight decay regularization. arXiv preprint arXiv:1711.05101, 2017. +[40] Leslie N Smith and Nicholay Topin. Super-convergence: Very fast training of neural networks using large learning rates. In Artificial intelligence and machine learning for multi-domain operations applications, volume 11006, pages 369-386. SPIE, 2019. 
+[41] Girish Narayanswamy, Yujia Liu, Yuzhe Yang, Chengqian Ma, Xin Liu, Daniel McDuff, and Shwetak Patel. Bigsmall: Efficient multi-task learning for disparate spatial and temporal physiological measurements. arXiv preprint arXiv:2303.11573, 2023. +[42] Daniel McDuff, Theodore Curran, and Achuta Kadambi. Synthetic data in healthcare. arXiv preprint arXiv:2304.03243, 2023. +[43] Akshay Paruchuri, Xin Liu, Yulu Pan, Shwetak Patel, Daniel McDuff, and Soumyadip Sengupta. Motion matters: Neural motion transfer for better camera physiological sensing. arXiv preprint arXiv:2303.12059, 2023. +[44] Tadas Baltrusaitis, Amir Zadeh, Yao Chong Lim, and Louis-Philippe Morency. Openface 2.0: Facial behavior analysis toolkit. In 2018 13th IEEE international conference on automatic face & gesture recognition (FG 2018), pages 59–66. IEEE, 2018. +[45] Danish Contractor, Daniel McDuff, Julia Katherine Haines, Jenny Lee, Christopher Hines, Brent Hecht, Nicholas Vincent, and Hanlin Li. Behavioral use licensing for responsible ai. In 2022 ACM Conference on Fairness, Accountability, and Transparency, pages 778-788, 2022. +[46] Vladimir Blazek, Ting Wu, and Dominik Hoelscher. Near-infrared ccd imaging: Possibilities for noninvasive and contactless 2d mapping of dermal venous hemodynamics. In Optical Diagnostics of Biological Fluids V, volume 3923, pages 2-9. International Society for Optics and Photonics, 2000. +[47] Chihiro Takano and Yuji Ohta. Heart rate measurement based on a time-lapse image. Medical engineering & physics, 29(8):853-857, 2007. +[48] Wim Verkruysse, Lars O Svaasand, and J Stuart Nelson. Remote plethysmographic imaging using ambient light. Opt. Express, 16(26):21434-21445, Dec 2008. +[49] Ming-Zher Poh, Daniel J McDuff, and Rosalind W Picard. Non-contact, automated cardiac pulse measurements using video imaging and blind source separation. Optics express, 18(10):10762-10774, 2010. + +[50] Antony Lam and Yoshinori Kuno. 
Robust heart rate measurement from video using select random patches. In Proceedings of the IEEE International Conference on Computer Vision, pages 3640-3648, 2015. +[51] Shuchang Xu, Lingyun Sun, and Gustavo Kunde Rohde. Robust efficient estimation of heart rate pulse from video. Biomedical optics express, 5(4):1124-1135, 2014. +[52] Qi Zhan, Wenjin Wang, and Gerard de Haan. Analysis of cnn-based remote-ppg to understand limitations and sensitivities. Biomedical Optics Express, 11(3):1268–1283, 2020. +[53] Eugene Lee, Evan Chen, and Chen-Yi Lee. Meta-rppg: Remote heart rate estimation using a transductive meta-learner. In European Conference on Computer Vision, pages 392–409. Springer, 2020. +[54] Xuesong Niu, Shiguang Shan, Hu Han, and Xilin Chen. Rhythmnet: End-to-end heart rate estimation from face via spatial-temporal representation. IEEE Transactions on Image Processing, 29:2409-2423, 2019. +[55] Xuesong Niu, Zitong Yu, Hu Han, Xiaobai Li, Shiguang Shan, and Guoying Zhao. Video-based remote physiological measurement via cross-verified feature disentangling. In European Conference on Computer Vision, pages 295–310. Springer, 2020. +[56] Rencheng Song, Huan Chen, Juan Cheng, Chang Li, Yu Liu, and Xun Chen. Pulsegan: Learning to generate realistic pulse waveforms in remote photoplethysmography. IEEE Journal of Biomedical and Health Informatics, 25(5):1373-1384, 2021. +[57] Hao Lu, Hu Han, and S Kevin Zhou. Dual-gan: Joint bvp and noise modeling for remote physiological measurement. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 12404-12413, 2021. +[58] Yassine Ouzar, Djamaleddine Djeldjli, Frédéric Bousefsaf, and Choubela Maaoui. X-ippgnet: A novel one stage deep learning architecture based on depthwise separable convolutions for video-based pulse rate estimation. Computers in Biology and Medicine, 154:106592, 2023. +[59] Xudong Tan, Menghan Hu, Guangtao Zhai, Yan Zhu, Wenfang Li, and Xiao-Ping Zhang. 
Lightweight video-based respiration rate detection algorithm: An application case on intensive care. IEEE Transactions on Multimedia, 2023. +[60] Hailan Kuang, Can Ao, Xiaolin Ma, and Xinhua Liu. Shuffle-rppgnet: Efficient network with global context for remote heart rate variability measurement. IEEE Sensors Journal, 2023. +[61] Joaquim Comas, Adria Ruiz, and Federico Sukno. Efficient remote photoplethysmography with temporal derivative modules and time-shift invariant loss. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 2182-2191, 2022. +[62] Hao Wang, Euijoon Ahn, and Jinman Kim. Self-supervised learning framework for remote heart rate estimation using spatiotemporal augmentation. arXiv preprint arXiv:2107.07695, 2021. +[63] Wei-Hao Chung, Cheng-Ju Hsieh, Sheng-Hung Liu, and Chiou-Ting Hsu. Domain generalized rppg network: Disentangled feature learning with domain permutation and domain augmentation. In Proceedings of the Asian Conference on Computer Vision, pages 807-823, 2022. +[64] Hao Lu, Zitong Yu, Xuesong Niu, and Ying-Cong Chen. Neuron structure modeling for generalizable remote physiological measurement. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 18589-18599, 2023. +[65] Fabian Schrumpf, Patrick Frenzel, Christoph Aust, Georg Osterhoff, and Mirco Fuchs. Assessment of deep learning based blood pressure prediction from ppg and rppg signals. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 3820-3830, 2021. + +[66] Theodore Curran, Xin Liu, Daniel McDuff, Shwetak Patel, and Eugene Yang. Camera-based remote photoplethysmography to measure heart rate and blood pressure in ambulatory patients with cardiovascular disease: Preliminary analysis. Journal of the American College of Cardiology, 81(8_Supplement):2301-2301, 2023. +[67] Ying-Ying Zheng, Yi-Tong Ma, Jin-Ying Zhang, and Xiang Xie. Covid-19 and the cardiovascular system. 
Nature Reviews Cardiology, 17(5):259–260, 2020.
[68] Mauricio Villarroel, Sitthichok Chaichulee, João Jorge, Sara Davis, Gabrielle Green, Carlos Arteta, Andrew Zisserman, Kenny McCormick, Peter Watkinson, and Lionel Tarassenko. Noncontact physiological monitoring of preterm infants in the neonatal intensive care unit. npj Digital Medicine, 2(1):1-18, 2019.

# A Overview of Appendices

Our appendices contain the following additional details and results:

- In Section B, we provide details regarding the background of rPPG and an overview of existing methods.
- In Section C, we provide an overview of potential applications of rPPG technologies.
- In Section D, we provide additional results about PhysNet.
- In Section E, we provide an overview of rPPG network recommendations.
- In Section F, we provide details regarding the metrics supported by our toolbox. We also provide additional metric results in Section G that were not included in the main paper due to space constraints.
- Section H briefly details which subjects we utilized for exclusion, or conversely sub-selection, in each task when dealing with the UBFC-Phys [27] dataset. We also briefly describe video filtering criteria available via the toolbox and useful for subject sub-selection.
- Additional details related to training and evaluation for physiological multitasking are shared in Section I.
- Section J briefly describes additional features included in the toolbox. These features, including pre-processed data visualization, loss and learning visualization, Bland-Altman plots, and motion analysis, are further detailed with exemplar usage in the rPPG-Toolbox's GitHub repo: https://github.com/ubicomplab/rPPG-Toolbox

# B Background and Existing Research in rPPG

# B.1 Background

![](images/fd6206d80e973ae6ed748c8a5d9d9e05dd907dcdb47dd20344c22f0c046ce243.jpg)
Figure 4: The principles behind camera-based physiological sensing.
The volumetric changes of blood under the surface of the skin cause changes in light absorption and reflection, which are the source of the PPG signal.

Remote PPG (rPPG) measurement involves the development of computational methods for extracting physiological parameters (e.g., pulse rate, respiration rate, blood oxygenation, blood pressure) based on light reflected from, or transmitted through, the human body. Essentially these methods use pixel information to quantify changes in visible light, or other electromagnetic radiation (e.g., infrared or thermal), that are modulated by blood flow in the periphery of the skin (see Fig. 4). This reflected radiation is also affected by body motions and the absorption characteristics of the skin [46, 47, 48, 49, 50, 51, 31]. In this article, we will focus primarily on the use of visible light, due to the ubiquitous nature of RGB cameras.

As visible light penetrates between 4 and 5 mm below the skin's surface, it is modulated by the volume of oxygenated and deoxygenated hemoglobin, enabling the measurement of the peripheral blood volume pulse (BVP). The frequency channels offered by multiband (e.g., RGB) cameras enable the composition of blood, including the oxygen saturation, to be measured. In addition, these pixels are affected by motion as a person breathes in and out and by the mechanical effects of the heart beating, enabling the measurement of breathing signals and the ballistocardiogram (BCG). Analyzing the morphology of these signals, and combining them together, offers the possibility of measuring correlates of blood pressure.

# B.2 Optical Principles

The Lambert-Beer law (LBL) [50, 51, 35] and Shafer's dichromatic reflection model (DRM) [31] are two models which provide a framework for capturing the effects of an imager, lighting, body motions, and physiological processes on recorded pixel intensities.
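As a reference point, the DRM can be sketched as follows (notation slightly simplified from [31]): the RGB value of the $k$-th skin pixel over time is modeled as

$$
\mathbf{C}_k(t) = I(t)\cdot\big(\mathbf{v}_s(t) + \mathbf{v}_d(t)\big) + \mathbf{v}_n(t)
$$

where $I(t)$ is the luminance intensity level (modulated by the light source and by distance changes due to body motion), $\mathbf{v}_s(t)$ is the specular (surface) reflection, $\mathbf{v}_d(t)$ is the diffuse reflection whose small time-varying component carries the pulse, and $\mathbf{v}_n(t)$ is camera quantization noise. The pulsatile signal of interest is thus a small, zero-mean variation riding on a much larger, near-stationary reflection term, which motivates the temporal normalization and color-channel projection steps used by most recovery algorithms.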
Given the optical characteristics of oxygenated and deoxygenated blood, we also have priors on the wavelengths of light that contain the strongest or weakest pulsatile information. This prior knowledge is important for measuring physiological parameters accurately, and most computational methods are built upon this grounding.

# B.3 Algorithms

Many computational approaches for recovering physiological signals from videos share similar steps. The first step typically involves localizing a region of interest within each video frame. In a large majority of cases the face or head is the region of interest, and therefore facial detection and/or landmark detection are used. However, in other cases skin segmentation might be preferred. Aggregating pixels spatially is a subsequent step that has been used to help reduce noise from camera quantization errors. This operation can be performed by downsampling an image [35] or simply averaging all pixel values within a region of interest [49]. Many cameras capture frames from more than one frequency band (e.g., RGB), providing complementary measurements that capture different properties of the light reflected from the body. This information can be used in two ways: 1) for understanding the composition of the blood (e.g., oxygen saturation), and 2) for improving the signal-to-noise ratio of the recovered blood volume pulse. Typically, computational methods leverage multiple bands and learn a linear or non-linear signal decomposition to estimate the pulse waveform. This manipulation of the color channel signals can be grounded in the optical properties of the skin [31, 30] or learned in a data-driven manner given a specialized learning criterion or loss function.

More recently, supervised machine learning has become the most popular approach. Specifically, deep learning and convolutional neural networks provide the current state-of-the-art results.
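As a concrete illustration of the classical steps just described (spatial averaging over a region of interest followed by a linear color-channel projection), below is a minimal, hypothetical sketch in the spirit of POS [31]. The function names are our own, and the projection is applied once over the whole sequence rather than in the sliding, overlap-added windows of the original method:

```python
import numpy as np
from scipy.signal import butter, filtfilt

def spatial_average(frames):
    """Average pixels within the ROI of each frame: (T, H, W, 3) -> (T, 3)."""
    return frames.reshape(frames.shape[0], -1, 3).mean(axis=1)

def pos_pulse(rgb_means, fs=30.0):
    """Recover a pulse waveform from per-frame RGB means, POS-style.

    Simplified: a single full-length projection instead of the original
    sliding-window, overlap-add procedure.
    """
    # Temporal normalization removes the large, near-static skin reflection level.
    c = rgb_means / rgb_means.mean(axis=0)
    # Project onto two directions in the plane orthogonal to the skin tone.
    s1 = c @ np.array([0.0, 1.0, -1.0])
    s2 = c @ np.array([-2.0, 1.0, 1.0])
    # Alpha tuning combines the two projections into one pulse estimate.
    h = s1 + (s1.std() / s2.std()) * s2
    # Band-pass to a plausible heart-rate range (0.7-2.5 Hz, i.e. 42-150 BPM).
    b, a = butter(2, [0.7 / (fs / 2), 2.5 / (fs / 2)], btype="band")
    return filtfilt(b, a, h - h.mean())
```

On a synthetic sequence whose color channels share a common pulsatile component, the dominant frequency of the returned waveform matches the simulated pulse rate.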
These methods present the opportunity for more "end-to-end" learning, and researchers have gradually tried to replace handcrafted processing steps with learnable components. Since the relationship between the underlying physiological signal and the skin pixels in a video is complicated, deep neural networks have shown superior performance in modeling such non-linear relationships compared to traditional source separation methods [35, 5, 52, 53, 6, 54, 55, 56, 57, 34]. Moreover, due to the flexibility of neural networks, researchers have also explored neural methods for real-time on-device inference [6, 36, 58, 59, 60, 61], self-supervised learning [14, 53, 62, 15, 13, 16], domain generalization [11, 63, 64], and estimating new vital measurements such as blood pressure [65, 66].

# C Potential Applications of rPPG

The SARS-CoV-2 (COVID-19) pandemic has accelerated the pace of change in healthcare services. In particular, how healthcare services are delivered around the world has needed to be rethought in the presence of new risks to patients and providers and restrictions on travel. The virus has been linked to an increased risk of cardiopulmonary (heart and lung related) illness, with symptoms such as respiratory distress syndrome, myocarditis, and the associated chronic damage to the cardiovascular system. Experts suggest that particular attention should be given to cardiovascular protection during treatment of COVID-19 [67]. While measurement alone is not the solution to these problems, the pandemic has acutely highlighted the need for scalable and accurate physiological monitoring. Ubiquitous or pervasive health sensing technology could help patients conduct daily screenings, monitor the effects of medication on their symptoms, and help clinicians make more informed and accurate decisions.

The potential advantages that video-based contactless measurement offers have helped to draw a significant amount of attention to the field in recent years.
Contact biomedical sensors (e.g., electrocardiograms, pulse oximeters) are the standard used for clinical screening and at-home measurement. However, these devices are usually bulky and are still not ubiquitously available, especially in low-resource settings. On the other hand, non-contact camera-based physiological sensing presents a new opportunity for highly scalable and low-cost physiological monitoring through ordinary cameras (e.g., webcams or smartphone cameras) [29]. Besides the convenience and potential scalability, this technology could also reduce the risk of infection for vulnerable patients and the discomfort caused by obtrusive leads and electrodes [68]. Finally, we believe there are two specifically compelling advantages of cameras over contact sensors. The first is that they can capture multi-modal signals, including but not limited to the activity of the subject, their appearance, facial expressions and gestures, motor control, and context. One reason this helps is that the physiological measurements can be interpreted in context. For example, if someone appears to be in pain, an elevated heart rate can be interpreted differently than if they do not. Secondly, cameras are spatial sensors, allowing signals from multiple parts of the body to be measured concomitantly and presenting greater opportunities for characterizing vascular parameters such as pulse transit time.

We would also argue that camera-based physiological sensing could be an influential technology in telehealth. Current telehealth procedures are mainly telephone or video-based communication services where patients see their physician or healthcare provider via Cisco Webex, Zoom or Microsoft Teams. Performing clinical visits at home increases the efficiency of clinical visits and helps people who live in remote locations. There is still a debate over whether high-quality care can be delivered over telehealth platforms.
One notable issue with current telehealth systems is that there is no way for physicians to assess a patient's physiological state. The development of accurate and efficient non-contact camera-based physiological sensing technology would give remote physicians access to physiological data with which to make more informed clinical decisions.

# D Investigation of PhysNet

Table 6: PhysNet Ablation Results. Investigation of variants of PhysNet models on the PURE [23] and UBFC-rPPG [22] datasets obtained using the rPPG toolbox.
MethodTrain SetTest Set
PURE [23]UBFC-rPPG [22]
MAE↓MAPE↓MAE↓MAPE↓
PHYSNET-RAW [5]UBFC-RPPG19.2533.75N/AN/A
PUREN/AN/A11.1711.64
SCAMPS18.4031.7410.5711.04
PHYSNET-DIFFNORM [6]UBFC-RPPG8.0613.67N/AN/A
PUREN/AN/A0.981.12
SCAMPS13.3020.105.405.42
PHYSNET-RAW-TUNED [35]UBFC-RPPG4.808.46N/AN/A
PUREN/AN/A1.991.86
SCAMPS14.3523.0010.3310.71
Metrics explained: MAE = Mean Absolute Error in HR estimation (Beats/Min), MAPE = Mean Absolute Percentage Error (%).

Based on the rPPG community's help and suggestions, we found that two important implementation details are missing from the original PhysNet paper: 1) the raw input range has to be set to 0-1; 2) normalization must be added to the model output. However, even with these changes and the raw input format, PhysNet is not able to achieve results close to SOTA on UBFC and PURE. As shown in Table 6's PhysNet-Raw-Tuned rows, we further fine-tuned the training parameters (changed the learning rate from 0.009 to 0.09) and the post-processing bandpass filter parameters (set the bandpass filtering parameters to [0.5, 2.5] and the detrend value to 200). However, we don't recommend using these network-specific parameters, as they make for an unfair comparison across different network architectures; this toolbox aims to provide a standardized training regime across all networks and datasets. All in all, we recommend using DiffNorm frames as the input to PhysNet to make training converge more easily.

We also note that results when training on SCAMPS and testing on PURE are poor. It is worth noting that SCAMPS is a synthetic dataset and can easily cause overfitting with a complex network architecture. Unlike the other 2D-CNN based networks, PhysNet is a 3D-CNN based network, which more easily overfits simple datasets.

# E Network Recommendation

In the dynamic realm of rPPG research, architectural choices critically influence the balance between efficiency and accuracy. This section summarizes our findings and offers recommendations on architectures, factoring in both computational aspects and performance nuances.

- Mobile and Computational Efficiency: For scenarios demanding computational frugality, such as mobile or edge deployments, 2D-CNN based architectures are recommended.
They balance performance and computational overhead well, ensuring fast responses in resource-limited environments.
- High-Performance Scenarios: 3D-CNN or Transformer-based networks, while resource-intensive, offer superior performance. These architectures are suited to applications with lenient resource constraints where maximum accuracy is sought.
- Performance Saturation: Notably, a saturation trend was observed across several network architectures. This suggests diminishing performance returns with increased complexity or depth, emphasizing the need to select architectures pragmatically based on actual task requirements.

A salient lesson from our research is the importance of incorporating diverse datasets. The rPPG community grapples with challenges such as: 1) motion-filled videos, which introduce substantial rPPG signal noise; 2) videos under different lighting conditions (e.g., natural light, LED, incandescent), which affect visual cues critical for models; 3) videos featuring darker-skinned individuals, who are historically underrepresented, leading to potential model biases. To bolster robustness and cater to diverse real-world scenarios, it is important that training sets include videos capturing the aforementioned conditions. Such holistic training helps ensure a model can handle a broad spectrum of real-world challenges.

# F Metric Details

# F.1 rPPG Metrics

We present explanations of the metrics supported by our toolbox below.
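As a concrete companion to the definitions that follow, the per-video rates can be scored with a few lines of plain Python (a minimal illustrative sketch; these helper names are ours and are not part of the toolbox's API):

```python
import math

def mae(r_g, r_p):
    """Mean Absolute Error between ground truth and predicted rates."""
    return sum(abs(g - p) for g, p in zip(r_g, r_p)) / len(r_g)

def rmse(r_g, r_p):
    """Root Mean Square Error."""
    return math.sqrt(sum((g - p) ** 2 for g, p in zip(r_g, r_p)) / len(r_g))

def mape(r_g, r_p):
    """Mean Absolute Percentage Error (as a fraction; x100 for percent)."""
    return sum(abs((g - p) / g) for g, p in zip(r_g, r_p)) / len(r_g)

def pearson(r_g, r_p):
    """Pearson correlation between ground truth and predicted rates."""
    n = len(r_g)
    mg, mp = sum(r_g) / n, sum(r_p) / n
    cov = sum((g - mg) * (p - mp) for g, p in zip(r_g, r_p))
    var_g = sum((g - mg) ** 2 for g in r_g)
    var_p = sum((p - mp) ** 2 for p in r_p)
    return cov / math.sqrt(var_g * var_p)

def se_mean(values):
    """Standard error of the mean: sigma / sqrt(n)."""
    n = len(values)
    mean = sum(values) / n
    sigma = math.sqrt(sum((v - mean) ** 2 for v in values) / n)
    return sigma / math.sqrt(n)

def se_pearson(r, n):
    """Standard error of a Pearson correlation over n samples."""
    return math.sqrt((1 - r ** 2) / (n - 2))
```

For example, `mae([70, 80], [72, 77])` gives 2.5 beats/min; in the result tables, MAPE is reported as a percentage.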
Mean Absolute Error (MAE): For predicted signal rate $R_{p}$, ground truth signal rate $R_{g}$, and $N$ instances:

$$
\mathrm{MAE} = \frac{1}{N} \sum_{n=1}^{N} \left| R_{g,n} - R_{p,n} \right|
$$

Root Mean Square Error (RMSE): For predicted signal rate $R_{p}$, ground truth signal rate $R_{g}$, and $N$ instances:

$$
\mathrm{RMSE} = \sqrt{\frac{1}{N} \sum_{n=1}^{N} \left( R_{g,n} - R_{p,n} \right)^{2}}
$$

Mean Absolute Percentage Error (MAPE): For predicted signal rate $R_{p}$, ground truth signal rate $R_{g}$, and $N$ instances:

$$
\mathrm{MAPE} = \frac{1}{N} \sum_{n=1}^{N} \left| \frac{R_{g,n} - R_{p,n}}{R_{g,n}} \right|
$$

Pearson Correlation $(\rho)$: For predicted signal rate $R_{p}$, ground truth signal rate $R_{g}$, $N$ instances, and $\overline{R}$ the average of $R$ over the $N$ samples:

$$
\rho = \frac{\sum_{n=1}^{N} \left( R_{g,n} - \overline{R_{g}} \right) \left( R_{p,n} - \overline{R_{p}} \right)}{\sqrt{\sum_{n=1}^{N} \left( R_{g,n} - \overline{R_{g}} \right)^{2} \sum_{n=1}^{N} \left( R_{p,n} - \overline{R_{p}} \right)^{2}}}
$$

Signal-to-Noise Ratio (SNR): As in [30], we calculate the Signal-to-Noise Ratio (SNR) for a predicted signal as the ratio between the area under the curve of the power spectrum around the first and second harmonics of the ground truth heart rate frequency and the area under the curve of the rest of the power spectrum. This is mathematically represented as follows:

$$
\mathrm{SNR} = 10 \log_{10} \left( \frac{\sum_{f=45}^{150} \left( U_{t}(f)\, S(f) \right)^{2}}{\sum_{f=45}^{150} \left( \left( 1 - U_{t}(f) \right) S(f) \right)^{2}} \right)
$$

Where $S$ is the power spectrum of the estimated rPPG signal, and $U_{t}(f)$ is equal to 1 around the first and second harmonics of the ground truth rPPG signal, while being 0 elsewhere in the power spectrum.
In the context of the rPPG-Toolbox, only the power spectrum between $0.75\,\mathrm{Hz}$ and $2.5\,\mathrm{Hz}$, or 45 beats/min and 150 beats/min, is considered. We report the mean of the SNR values calculated per video or test sample, such that:

$$
\mathrm{MSNR} = \frac{1}{N} \sum_{n=1}^{N} \mathrm{SNR}_{n}
$$

Standard Error $(\pm \mathrm{SE})$: The standard error is a measure of the statistical accuracy of an estimate, such as the mean, and is equal to the standard deviation of the theoretical distribution of a large population of such estimates. The standard error takes into account the number of samples used in measurement, which is especially useful in the case of remote PPG datasets, where the number of test samples can vary significantly from dataset to dataset. For all metrics except the Pearson correlation $(\rho)$, we calculate the standard error as:

$$
SE = \frac{\sigma}{\sqrt{n}}
$$

Where $\sigma$ is the standard deviation and $n$ is the number of samples. For the Pearson correlation $(\rho)$, the standard error is calculated as:

$$
SE_{\rho} = \sqrt{\frac{1 - r^{2}}{n - 2}}
$$

Where $r$ is the correlation coefficient and $n$ is the number of samples. As with a standard deviation, we report the standard error as $\pm SE$.

# F.2 Additional Multitask Metrics

We present explanations of additional metrics added to evaluate the BigSmall [41] model, in order to exemplify how this toolbox can be extended to support physiological multitasking.

Evaluated Action Units (AU): Similar to [41] and other AU literature, facial action metrics are calculated for the following 12 commonly used AUs: AU01, AU02, AU04, AU06, AU07, AU10, AU12, AU14, AU15, AU17, AU23, AU24.

F1: The harmonic mean of recall and precision. For true positive count $TP$, false positive count $FP$, and false negative count $FN$:
$$
F1 = 100 \times \frac{2\,TP}{2\,TP + FP + FN}
$$

Precision (Prec.): For true positive count $TP$ and false positive count $FP$:

$$
\mathrm{Precision} = 100 \times \frac{TP}{TP + FP}
$$

Accuracy (Acc.): For true positive count $TP$, true negative count $TN$, false positive count $FP$, and false negative count $FN$:

$$
\mathrm{Accuracy} = 100 \times \frac{TP + TN}{TP + TN + FP + FN}
$$

# G Additional Results

We reiterate results provided in the main paper and present additional results, including the RMSE, SNR, Pearson correlation, and the corresponding standard errors. Note that there may be minor differences between the results in the following tables and those in the main paper, as they were generated on a different machine using the latest version of the rPPG-Toolbox.

Table 7: Benchmark Results. Performance on the PURE [23] and UBFC-rPPG [22] datasets obtained using the rPPG toolbox. For the supervised methods we show cross-dataset training results using the UBFC-rPPG, PURE, and SCAMPS datasets.
**Test Set: PURE [23]**

| | Method | Train Set | MAE↓ | RMSE↓ | MAPE↓ | ρ↑ | SNR↑ |
|---|---|---|---|---|---|---|---|
| Unsupervised | GREEN [28] | N/A | 10.09 ± 2.81 | 23.85 ± 217.81 | 10.28 ± 2.33 | 0.34 ± 0.12 | -2.66 ± 1.43 |
| | ICA [29] | N/A | 4.77 ± 2.08 | 16.07 ± 153.84 | 4.47 ± 1.65 | 0.72 ± 0.09 | 5.24 ± 1.77 |
| | CHROM [30] | N/A | 5.77 ± 1.79 | 14.93 ± 81.53 | 11.52 ± 3.75 | 0.81 ± 0.08 | 4.58 ± 0.85 |
| | LGI [33] | N/A | 4.61 ± 1.91 | 15.38 ± 134.14 | 4.96 ± 1.72 | 0.77 ± 0.08 | 4.50 ± 1.21 |
| | PBV [32] | N/A | 3.92 ± 1.61 | 12.99 ± 123.60 | 4.84 ± 1.49 | 0.84 ± 0.07 | 2.30 ± 1.31 |
| | POS [31] | N/A | 3.67 ± 1.46 | 11.82 ± 66.87 | 7.25 ± 3.03 | 0.88 ± 0.06 | 6.87 ± 0.95 |
| Supervised | TS-CAN [6] | UBFC-rPPG | 3.69 ± 1.74 | 13.8 ± 113.84 | 3.39 ± 1.44 | 0.82 ± 0.08 | 5.26 ± 1.11 |
| | | SCAMPS | 4.66 ± 1.68 | 13.69 ± 92.53 | 5.83 ± 2.03 | 0.82 ± 0.08 | 0.95 ± 1.04 |
| | PhysNet [5] | UBFC-rPPG | 8.06 ± 2.34 | 19.71 ± 129.34 | 13.67 ± 4.04 | 0.61 ± 0.11 | 6.68 ± 1.16 |
| | | SCAMPS | 13.30 ± 2.00 | 20.30 ± 94.85 | 20.01 ± 2.97 | 0.51 ± 0.11 | -8.73 ± 0.78 |
| | PhysFormer [34] | UBFC-rPPG | 12.92 ± 2.69 | 24.36 ± 132.24 | 23.92 ± 5.22 | 0.47 ± 0.12 | 2.16 ± 1.05 |
| | | SCAMPS | 26.58 ± 2.14 | 31.24 ± 133.33 | 42.79 ± 4.06 | 0.12 ± 0.13 | -12.56 ± 0.53 |
| | DeepPhys [35] | UBFC-rPPG | 5.54 ± 2.30 | 18.51 ± 173.09 | 5.32 ± 1.90 | 0.66 ± 0.10 | 4.40 ± 1.32 |
| | | SCAMPS | 3.96 ± 1.67 | 13.44 ± 98.86 | 4.25 ± 1.60 | 0.83 ± 0.07 | 5.07 ± 1.15 |
| | EfficientPhys-C [36] | UBFC-rPPG | 5.47 ± 2.10 | 17.04 ± 143.80 | 5.40 ± 1.76 | 0.71 ± 0.09 | 4.09 ± 1.16 |
| | | SCAMPS | 10.24 ± 2.48 | 21.65 ± 173.96 | 11.70 ± 2.28 | 0.46 ± 0.12 | -5.49 ± 1.05 |

**Test Set: UBFC-rPPG [22]**

| | Method | Train Set | MAE↓ | RMSE↓ | MAPE↓ | ρ↑ | SNR↑ |
|---|---|---|---|---|---|---|---|
| Unsupervised | GREEN [28] | N/A | 19.73 ± 3.75 | 31.00 ± 235.38 | 18.72 ± 3.33 | 0.37 ± 0.15 | -11.18 ± 1.63 |
| | ICA [29] | N/A | 16.00 ± 3.09 | 25.65 ± 163.58 | 15.35 ± 2.77 | 0.44 ± 0.14 | -9.91 ± 1.78 |
| | CHROM [30] | N/A | 4.06 ± 1.21 | 8.83 ± 33.93 | 3.84 ± 1.10 | 0.89 ± 0.07 | -2.96 ± 1.18 |
| | LGI [33] | N/A | 15.80 ± 3.67 | 28.55 ± 236.17 | 14.70 ± 3.20 | 0.36 ± 0.15 | -8.15 ± 1.41 |
| | PBV [32] | N/A | 15.90 ± 3.25 | 26.40 ± 199.71 | 15.17 ± 2.91 | 0.48 ± 0.14 | -9.16 ± 1.35 |
| | POS [31] | N/A | 4.08 ± 1.01 | 7.72 ± 21.87 | 3.93 ± 0.91 | 0.92 ± 0.06 | -2.39 ± 1.14 |
| Supervised | TS-CAN [6] | PURE | 1.30 ± 0.40 | 2.87 ± 3.05 | 1.50 ± 0.47 | 0.99 ± 0.02 | 1.49 ± 1.13 |
| | | SCAMPS | 3.62 ± 0.91 | 6.92 ± 18.30 | 3.53 ± 0.84 | 0.93 ± 0.06 | -3.91 ± 0.98 |
| | PhysNet [5] | PURE | 0.98 ± 0.35 | 2.48 ± 2.55 | 1.12 ± 0.42 | 0.99 ± 0.02 | 1.09 ± 1.15 |
| | | SCAMPS | 5.40 ± 1.46 | 10.89 ± 48.53 | 5.43 ± 1.38 | 0.82 ± 0.09 | -4.97 ± 1.03 |
| | PhysFormer [34] | PURE | 1.44 ± 0.54 | 3.77 ± 7.93 | 1.66 ± 0.62 | 0.98 ± 0.03 | 0.18 ± 1.12 |
| | | SCAMPS | 4.56 ± 1.46 | 10.48 ± 68.96 | 5.18 ± 1.93 | 0.81 ± 0.09 | -6.34 ± 0.80 |
| | DeepPhys [35] | PURE | 1.21 ± 0.41 | 2.90 ± 3.75 | 1.42 ± 0.49 | 0.99 ± 0.02 | 1.74 ± 1.16 |
| | | SCAMPS | 3.10 ± 1.44 | 9.81 ± 74.70 | 3.08 ± 1.32 | 0.87 ± 0.08 | -0.79 ± 1.22 |
| | EfficientPhys-C [36] | PURE | 2.07 ± 0.92 | 6.32 ± 32.01 | 2.10 ± 0.87 | 0.94 ± 0.05 | -0.12 ± 1.20 |
| | | SCAMPS | 12.64 ± 3.15 | 23.99 ± 182.44 | 11.26 ± 2.67 | 0.34 ± 0.15 | -9.36 ± 1.05 |
MAE = Mean Absolute Error in HR estimation (Beats/Min), RMSE = Root Mean Square Error in HR estimation (Beats/Min), MAPE = Mean Absolute Percentage Error (%), $\rho$ = Pearson Correlation in HR estimation, SNR = Signal-to-Noise Ratio (dB) when comparing the predicted spectrum to the ground truth spectrum.

Table 8: Benchmark Results. Performance on the UBFC-Phys [27] and MMPD [25] datasets generated using the rPPG toolbox. For the supervised methods we show cross-dataset training results using the UBFC-rPPG, PURE, and SCAMPS datasets.
**Test Set: UBFC-Phys [27]**

| | Method | Train Set | MAE↓ | RMSE↓ | MAPE↓ | ρ↑ | SNR↑ |
|---|---|---|---|---|---|---|---|
| Unsupervised | GREEN [28] | N/A | 13.55 ± 1.30 | 18.80 ± 48.87 | 16.01 ± 1.42 | 0.29 ± 0.10 | -10.34 ± 0.65 |
| | ICA [29] | N/A | 10.04 ± 1.20 | 15.73 ± 43.63 | 11.85 ± 1.35 | 0.36 ± 0.09 | -5.28 ± 0.98 |
| | CHROM [30] | N/A | 4.49 ± 0.60 | 7.56 ± 13.84 | 6.00 ± 0.88 | 0.80 ± 0.06 | -1.92 ± 0.85 |
| | LGI [33] | N/A | 6.27 ± 0.83 | 10.41 ± 22.76 | 7.83 ± 0.99 | 0.70 ± 0.07 | -3.30 ± 0.91 |
| | PBV [32] | N/A | 12.34 ± 1.22 | 17.43 ± 47.24 | 14.63 ± 1.33 | 0.33 ± 0.09 | -9.33 ± 0.71 |
| | POS [31] | N/A | 4.51 ± 0.68 | 8.16 ± 17.36 | 6.12 ± 0.99 | 0.77 ± 0.06 | -1.28 ± 0.90 |
| Supervised | TS-CAN [6] | UBFC-rPPG | 5.13 ± 0.63 | 8.12 ± 18.47 | 6.53 ± 0.85 | 0.76 ± 0.07 | -1.95 ± 0.81 |
| | | PURE | 5.72 ± 0.66 | 8.78 ± 16.94 | 7.34 ± 0.90 | 0.72 ± 0.07 | -3.72 ± 0.78 |
| | | SCAMPS | 5.55 ± 0.67 | 8.71 ± 16.96 | 6.91 ± 0.85 | 0.72 ± 0.07 | -4.40 ± 0.66 |
| | PhysNet [5] | UBFC-rPPG | 5.79 ± 0.76 | 9.60 ± 17.64 | 7.69 ± 1.07 | 0.70 ± 0.07 | -1.63 ± 0.99 |
| | | PURE | 4.78 ± 0.72 | 8.68 ± 18.99 | 6.15 ± 0.98 | 0.73 ± 0.07 | -0.71 ± 1.00 |
| | | SCAMPS | 8.53 ± 0.98 | 13.02 ± 33.08 | 11.22 ± 1.35 | 0.43 ± 0.10 | -7.15 ± 0.60 |
| | PhysFormer [34] | UBFC-rPPG | 6.63 ± 0.77 | 10.22 ± 18.12 | 8.91 ± 1.12 | 0.69 ± 0.07 | -3.58 ± 0.93 |
| | | PURE | 6.04 ± 0.76 | 9.77 ± 18.38 | 7.67 ± 0.99 | 0.65 ± 0.08 | -2.16 ± 0.95 |
| | | SCAMPS | 11.91 ± 1.13 | 16.42 ± 46.43 | 15.57 ± 1.64 | 0.27 ± 0.097 | -10.38 ± 0.39 |
| | DeepPhys [35] | UBFC-rPPG | 6.62 ± 0.84 | 10.69 ± 25.90 | 8.21 ± 1.04 | 0.66 ± 0.08 | -2.98 ± 0.82 |
| | | PURE | 8.42 ± 1.09 | 13.80 ± 38.06 | 10.18 ± 1.29 | 0.44 ± 0.09 | -4.41 ± 0.84 |
| | | SCAMPS | 4.75 ± 0.58 | 7.50 ± 14.47 | 5.89 ± 0.72 | 0.82 ± 0.06 | -2.04 ± 0.76 |
| | EfficientPhys-C [36] | UBFC-rPPG | 4.93 ± 0.58 | 7.65 ± 14.44 | 6.25 ± 0.79 | 0.79 ± 0.06 | -2.09 ± 0.82 |
| | | PURE | 5.31 ± 0.78 | 9.44 ± 27.67 | 6.61 ± 0.96 | 0.70 ± 0.07 | -2.22 ± 0.81 |
| | | SCAMPS | 6.97 ± 0.79 | 10.58 ± 22.70 | 8.47 ± 0.91 | 0.64 ± 0.08 | -7.38 ± 0.47 |

**Test Set: MMPD [25]**

| | Method | Train Set | MAE↓ | RMSE↓ | MAPE↓ | ρ↑ | SNR↑ |
|---|---|---|---|---|---|---|---|
| Unsupervised | GREEN [28] | N/A | 21.68 ± 0.67 | 27.69 ± 42.21 | 24.39 ± 0.64 | -0.01 ± 0.04 | -14.34 ± 0.26 |
| | ICA [29] | N/A | 18.60 ± 0.61 | 24.30 ± 33.80 | 20.88 ± 0.58 | 0.01 ± 0.04 | -13.84 ± 0.27 |
| | CHROM [30] | N/A | 13.66 ± 0.50 | 18.76 ± 23.82 | 16.00 ± 0.57 | 0.08 ± 0.04 | -11.74 ± 0.21 |
| | LGI [33] | N/A | 17.08 ± 0.62 | 23.32 ± 34.46 | 18.98 ± 0.60 | 0.04 ± 0.04 | -13.15 ± 0.25 |
| | PBV [32] | N/A | 17.95 ± 0.60 | 23.58 ± 32.45 | 20.18 ± 0.58 | 0.09 ± 0.04 | -13.88 ± 0.24 |
| | POS [31] | N/A | 12.36 ± 0.49 | 17.71 ± 23.65 | 14.43 ± 0.55 | 0.18 ± 0.04 | -11.53 ± 0.22 |
| Supervised | TS-CAN [6] | UBFC-rPPG | 14.01 ± 0.61 | 21.04 ± 30.02 | 15.48 ± 0.61 | 0.24 ± 0.04 | -10.18 ± 0.28 |
| | | PURE | 13.94 ± 0.64 | 21.61 ± 33.02 | 15.15 ± 0.63 | 0.20 ± 0.04 | -9.94 ± 0.27 |
| | | SCAMPS | 19.05 ± 0.58 | 24.20 ± 31.90 | 21.77 ± 0.60 | 0.14 ± 0.04 | -13.24 ± 0.25 |
| | PhysNet [5] | UBFC-rPPG | 9.47 ± 0.50 | 16.01 ± 22.74 | 11.11 ± 0.58 | 0.31 ± 0.04 | -8.15 ± 0.26 |
| | | PURE | 13.93 ± 0.57 | 20.29 ± 27.57 | 15.61 ± 0.59 | 0.17 ± 0.04 | -10.59 ± 0.27 |
| | | SCAMPS | 20.78 ± 0.55 | 25.09 ± 31.92 | 24.43 ± 0.62 | 0.17 ± 0.04 | -15.86 ± 0.20 |
| | PhysFormer [34] | UBFC-rPPG | 12.1 ± 0.51 | 17.79 ± 23.77 | 15.41 ± 0.74 | 0.17 ± 0.04 | -10.53 ± 0.20 |
| | | PURE | 14.57 ± 0.57 | 20.71 ± 29.1 | 16.73 ± 0.63 | 0.15 ± 0.039 | -12.15 ± 0.22 |
| | | SCAMPS | 22.69 ± 0.57 | 26.94 ± 34.41 | 27.06 ± 0.70 | 0.15 ± 0.04 | -18.83 ± 0.32 |
| | DeepPhys [35] | UBFC-rPPG | 17.50 ± 0.70 | 25.00 ± 38.62 | 19.27 ± 0.68 | 0.06 ± 0.04 | -11.72 ± 0.33 |
| | | PURE | 16.92 ± 0.70 | 24.61 ± 38.03 | 18.54 ± 0.68 | 0.05 ± 0.04 | -11.53 ± 0.31 |
| | | SCAMPS | 15.22 ± 0.68 | 23.17 ± 38.46 | 16.56 ± 0.66 | 0.09 ± 0.04 | -10.23 ± 0.31 |
| | EfficientPhys-C [36] | UBFC-rPPG | 13.78 ± 0.68 | 22.25 ± 37.94 | 15.15 ± 0.70 | 0.09 ± 0.04 | -9.13 ± 0.31 |
| | | PURE | 14.03 ± 0.64 | 21.62 ± 32.95 | 15.32 ± 0.63 | 0.17 ± 0.04 | -9.95 ± 0.29 |
| | | SCAMPS | 20.41 ± 0.57 | 25.06 ± 31.72 | 23.52 ± 0.61 | 0.11 ± 0.04 | -14.28 ± 0.24 |
MAE = Mean Absolute Error in HR estimation (Beats/Min), RMSE = Root Mean Square Error in HR estimation (Beats/Min), MAPE = Mean Absolute Percentage Error (%), $\rho$ = Pearson Correlation in HR estimation, SNR = Signal-to-Noise Ratio (dB) when comparing the predicted spectrum to the ground truth spectrum.

Table 9: Training with Motion-Augmented Data. We demonstrate results training on a motion-augmented (MA) version of the UBFC-rPPG [22] dataset generated using an open-source motion augmentation pipeline [43] and testing on the unaugmented versions of the PURE [23], UBFC-Phys [27], and MMPD [25] datasets.
**Training Set: MA UBFC-rPPG [22], Testing Set: PURE [23]** (supervised methods, Metric ± Std. Err.)

| Method | MAE↓ | RMSE↓ | MAPE↓ | ρ↑ | SNR↑ |
|---|---|---|---|---|---|
| TS-CAN [6] | 1.07 ± 0.75 | 5.89 ± 33.75 | 1.20 ± 0.83 | 0.97 ± 0.03 | 8.86 ± 0.95 |
| PhysNet (Normalized) [5] | 17.03 ± 2.97 | 28.50 ± 149.16 | 32.37 ± 5.82 | 0.38 ± 0.12 | 7.27 ± 0.88 |
| DeepPhys [35] | 1.15 ± 0.76 | 5.95 ± 33.75 | 1.40 ± 0.85 | 0.97 ± 0.03 | 9.94 ± 1.00 |
| EfficientPhys-C [36] | 2.59 ± 1.43 | 11.29 ± 96.01 | 2.67 ± 1.27 | 0.88 ± 0.06 | 6.75 ± 1.12 |

**Training Set: MA UBFC-rPPG [22], Testing Set: UBFC-Phys [27]** (supervised methods, Metric ± Std. Err.)

| Method | MAE↓ | RMSE↓ | MAPE↓ | ρ↑ | SNR↑ |
|---|---|---|---|---|---|
| TS-CAN [6] | 5.03 ± 0.67 | 8.39 ± 18.26 | 6.36 ± 0.90 | 0.75 ± 0.07 | -1.15 ± 0.81 |
| PhysNet (Normalized) [5] | 5.51 ± 0.88 | 0.44 ± 37.65 | 7.50 ± 1.32 | 0.68 ± 0.07 | -0.57 ± 1.08 |
| DeepPhys [35] | 4.95 ± 0.67 | 8.37 ± 21.53 | 6.26 ± 0.90 | 0.75 ± 0.07 | -0.78 ± 0.85 |
| EfficientPhys-C [36] | 4.80 ± 0.58 | 7.52 ± 15.02 | 6.10 ± 0.79 | 0.79 ± 0.06 | -0.87 ± 0.86 |

**Training Set: MA UBFC-rPPG [22], Testing Set: MMPD [25]** (supervised methods, Metric ± Std. Err.)

| Method | MAE↓ | RMSE↓ | MAPE↓ | ρ↑ | SNR↑ |
|---|---|---|---|---|---|
| TS-CAN [6] | 12.59 ± 0.62 | 20.23 ± 31.27 | 13.77 ± 0.62 | 0.23 ± 0.04 | -9.19 ± 0.29 |
| PhysNet (Normalized) [5] | 10.68 ± 0.49 | 16.56 ± 19.72 | 14.01 ± 0.72 | 0.32 ± 0.04 | -9.28 ± 0.21 |
| DeepPhys [35] | 12.71 ± 0.65 | 21.04 ± 35.40 | 13.70 ± 0.64 | 0.21 ± 0.04 | -8.85 ± 0.31 |
| EfficientPhys-C [36] | 13.42 ± 0.66 | 21.64 ± 35.46 | 14.52 ± 0.65 | 0.14 ± 0.04 | -9.20 ± 0.31 |
MAE = Mean Absolute Error in HR estimation (Beats/Min), RMSE = Root Mean Square Error in HR estimation (Beats/Min), MAPE = Mean Absolute Percentage Error (%), $\rho$ = Pearson Correlation in HR estimation, SNR = Signal-to-Noise Ratio (dB) when comparing the predicted spectrum to the ground truth spectrum.

Table 10: Training with Pseudo Labels. For the supervised methods we show results training with the (entire) BP4D+ [26] dataset, using POS [31] derived pseudo training labels.
**Training Set: BP4D+ [26] with POS Pseudo Labels, Testing Set: UBFC-rPPG [22]** (supervised methods, Metric ± Std. Err.)

| Method | MAE↓ | RMSE↓ | MAPE↓ | ρ↑ | SNR↑ |
|---|---|---|---|---|---|
| TS-CAN [6] | 4.69 ± 1.88 | 13.04 ± 100.15 | 4.51 ± 1.65 | 0.78 ± 0.10 | 0.01 ± 1.27 |
| PhysNet (Normalized) [5] | 1.78 ± 0.67 | 4.68 ± 11.94 | 1.92 ± 0.72 | 0.96 ± 0.04 | 1.24 ± 1.08 |
| DeepPhys [35] | 2.74 ± 0.96 | 6.78 ± 27.43 | 2.81 ± 0.91 | 0.93 ± 0.06 | -0.22 ± 1.33 |
| EfficientPhys-C [36] | 2.43 ± 1.29 | 8.68 ± 67.51 | 2.52 ± 1.20 | 0.90 ± 0.07 | 0.39 ± 1.27 |

**Training Set: BP4D+ [26] with POS Pseudo Labels** (supervised methods, Metric ± Std. Err.)

| Method | MAE↓ | RMSE↓ | MAPE↓ | ρ↑ | SNR↑ |
|---|---|---|---|---|---|
| TS-CAN [6] | 1.29 ± 0.76 | 6.00 ± 33.74 | 1.60 ± 0.86 | 0.97 ± 0.03 | 8.61 ± 1.02 |
| PhysNet (Normalized) [5] | 3.69 ± 1.46 | 11.79 ± 64.42 | 7.35 ± 3.01 | 0.88 ± 0.06 | 8.33 ± 0.06 |
| DeepPhys [35] | 2.47 ± 1.41 | 11.11 ± 93.02 | 2.49 ± 1.21 | 0.89 ± 0.06 | 17.32 ± 1.09 |
| EfficientPhys-C [36] | 3.59 ± 1.84 | 14.55 ± 135.51 | 3.27 ± 1.50 | 0.80 ± 0.08 | 7.48 ± 1.15 |
MAE = Mean Absolute Error in HR estimation (Beats/Min), RMSE = Root Mean Square Error in HR estimation (Beats/Min), MAPE = Mean Absolute Percentage Error (%), $\rho$ = Pearson Correlation in HR estimation, SNR = Signal-to-Noise Ratio (dB) when comparing the predicted spectrum to the ground truth spectrum.

Table 11: Full 3-Fold Multitasking Results. For the BigSmall [41] method we show the full 3-fold results for multi-tasking PPG, respiration, and action unit classification; training and evaluating on the BP4D+ [26] (AU subset) dataset, using POS [31] derived pseudo training PPG labels.
**Training and Testing Set: BP4D+ [26]** (3-fold cross validation)

Heart Rate (Metric ± Std. Err.):

| Metric | Fold 1 | Fold 2 | Fold 3 |
|---|---|---|---|
| MAE↓ | 4.24 ± 0.73 | 2.91 ± 0.49 | 2.54 ± 0.48 |
| RMSE↓ | 10.76 ± 33.20 | 7.26 ± 13.90 | 7.06 ± 16.67 |
| MAPE↓ | 4.55 ± 0.74 | 3.22 ± 0.53 | 2.75 ± 0.49 |
| ρ↑ | 0.68 ± 0.05 | 0.90 ± 0.03 | 0.91 ± 0.03 |
| SNR↑ | 3.85 ± 0.69 | 6.27 ± 0.67 | 6.53 ± 0.63 |

Respiration (Metric ± Std. Err.):

| Metric | Fold 1 | Fold 2 | Fold 3 |
|---|---|---|---|
| MAE↓ | 5.28 ± 0.31 | 4.96 ± 0.33 | 5.34 ± 0.35 |
| RMSE↓ | 6.74 ± 4.38 | 6.67 ± 4.96 | 7.18 ± 5.12 |
| MAPE↓ | 24.41 ± 1.55 | 25.30 ± 2.08 | 29.14 ± 2.72 |
| ρ↑ | 0.15 ± 0.07 | 0.16 ± 0.72 | 0.12 ± 0.07 |
| SNR↑ | 7.69 ± 0.64 | 10.53 ± 0.75 | 9.34 ± 0.64 |

Facial Action (AU) (F1↑, Prec.):

| AU | F1↑ | Prec. | F1↑ | Prec. |
|---|---|---|---|---|
| AU01 | 18.62 | 11.04 | 18.88 | 11.34 |
| AU02 | 20.76 | 12.73 | 18.28 | 10.89 |
| AU04 | 12.57 | 8.08 | 11.48 | 7.85 |
| AU06 | 66.73 | 66.58 | 64.71 | 61.09 |
| AU07 | 74.86 | 78.68 | 70.08 | 75.10 |
| AU10 | 74.92 | 77.32 | 70.09 | 74.48 |
| AU12 | 72.69 | 70.79 | 67.75 | 68.54 |
| AU14 | 67.21 | 72.84 | 70.18 | 69.11 |
| AU15 | 22.56 | 13.91 | 22.33 | 13.38 |
| AU17 | 25.77 | 18.01 | 20.95 | 12.45 |
| AU23 | 34.64 | 27.41 | 34.21 | 24.19 |
| AU24 | 7.00 | 3.71 | 10.70 | 6.20 |

Facial Action (AU) (Metric Mean):

| Metric | | |
|---|---|---|
| F1↑ | 41.53 | 39.97 |
| Prec.↑ | 36.42 | 36.22 |
| Acc. (%)↑ | 61.91 | 62.42 |
For HR estimation, MAE = Mean Absolute Error, RMSE = Root Mean Square Error, MAPE = Mean Absolute Percentage Error (%), $\rho$ = Pearson Correlation, SNR = Signal-to-Noise Ratio (dB) when comparing the predicted spectrum to the ground truth spectrum. For AU classification, F1 = harmonic mean of precision and recall, Prec. = precision, Acc. = accuracy.

# H UBFC-Phys Video Exclusion

For evaluation of the UBFC-Phys [27] dataset in our main paper, and by default in our toolbox, we utilized all three tasks and the same subject exclusion (or conversely, sub-selection) list provided by the authors of the dataset in the second supplementary material of their paper [27]. Based on that supplementary material, we eliminated 14 subjects (s3, s8, s9, s26, s28, s30, s31, s32, s33, s40, s52, s53, s54, s56) for the rest task (T1), 30 subjects (s1, s4, s6, s8, s9, s11, s12, s13, s14, s19, s21, s22, s25, s26, s27, s28, s31, s32, s33, s35, s38, s39, s41, s42, s45, s47, s48, s52, s53, s55) for the speech task (T2), and 23 subjects (s5, s8, s9, s10, s13, s14, s17, s22, s25, s26, s28, s30, s32, s33, s35, s37, s40, s47, s48, s49, s50, s52, s53) for the arithmetic task (T3).

In our toolbox, video exclusion is achieved using dataset filtering criteria specified in the config file. Specifically, an exclusion list or a task selection list can be provided to, respectively, exclude videos or select specific tasks as part of a dataset.

# I Multitasking Training and Evaluation Details

To show how this toolbox may be extended for physiological multitasking, we implement BigSmall [41], a model that multitasks PPG, respiration, and facial action. Here we reiterate information from [41], with slight modifications, for clarification.
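The AU classification metrics used in this evaluation (defined in Appendix F.2) reduce to simple counts of true/false positives and negatives. A minimal plain-Python sketch (function names are ours, not part of the toolbox's API):

```python
def f1(tp: int, fp: int, fn: int) -> float:
    # Harmonic mean of precision and recall, scaled to 0-100.
    return 100 * 2 * tp / (2 * tp + fp + fn)

def precision(tp: int, fp: int) -> float:
    # Fraction of positive predictions that are correct, scaled to 0-100.
    return 100 * tp / (tp + fp)

def accuracy(tp: int, tn: int, fp: int, fn: int) -> float:
    # Fraction of all predictions that are correct, scaled to 0-100.
    return 100 * (tp + tn) / (tp + tn + fp + fn)
```

These are computed per action unit and then averaged across the 12 evaluated AUs.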
# I.1 Cross Validation Subject Folds

Fold 1: F003, F004, F005, F006, F009, F017, F022, F028, F029, F031, F032, F033, F038, F044, F047, F048, F052, F053, F055, F051, F061, F063, F067, F068, F074, F075, F076, F081, M003, M005, M006, M009, M012, M019, M025, M026, M028, M031, M036, M037, M040, M046, M047, M049, M051, M054, M056.
Fold 2: F001, F002, F008, F018, F021, F025, F026, F035, F036, F037, F039, F040, F041, F042, F046, F049, F057, F058, F060, F062, F064, F066, F070, F071, F072, F073, F077, F082, M001, M002, M007, M013, M014, M016, M022, M023, M024, M027, M029, M030, M034, M035, M041, M042, M043, M048, M055.
Fold 3: F007, F010, F011, F012, F013, F014, F015, F016, F019, F020, F023, F024, F027, F030, F034, F043, F045, F050, F051, F054, F056, F059, F065, F069, F078, F079, F080, M004, M008, M010, M011, M015, M017, M018, M020, M021, M032, M033, M038, M039, M044, M045, M050, M052, M053, M057, M058.

# I.2 AU Subset

The AU subset used for model training and evaluation (in this toolbox) is made up of the subset of the dataset that contains action unit labels. This consists of approximately 20 seconds' worth of data from the following tasks for each subject: T1, T6, T7, T8.

# I.3 Subject Fold Splits

BigSmall [41] is evaluated using 3-fold cross validation, where the folds comprise trials from mutually exclusive subjects in the dataset. These subject-wise folds are outlined in I.1 above.

# J Additional Features

# J.1 Pre-processed Data Visualization

Pre-processing is an important aspect of the rPPG task that we hope to help standardize using our toolbox. It is advantageous to be able to quickly visualize and visually evaluate pre-processed image data and ground truth signals. Image data in particular can be especially useful to inspect in order to assess the effectiveness of the out-of-the-box face detection and cropping techniques used in our toolbox, and ultimately to get an idea of how much of the face region is visible in a given video.
We provide simple Jupyter Notebooks for quickly visualizing image data and ground truth signals pre-processed by our toolbox. Further details regarding these notebooks can be found in our GitHub repo and the associated README.

# J.2 Training Loss, Validation Loss, and Learning Rate Visualization

The rPPG-Toolbox assumes certain defaults across most config files for supervised methods, including a default learning rate of 0.009 used alongside the Adam [38] or AdamW [39] optimizers, a criterion such as mean squared error (MSE) loss or Negative Pearson Correlation loss, and the 1-cycle learning rate scheduler [40]. An exception is BigSmall [41], which uses a default learning rate of 0.001 that remains constant throughout training. It can be valuable to visualize losses during the training and validation phases. Furthermore, it may be useful to simultaneously visualize the learning rate, especially when users stray from the defaults in order to find an optimal set of training, validation, and testing parameters for their research efforts. The toolbox's config contains parameters that enable visualization of the training loss, validation loss, and the learning rate for any given supervised method.

# J.3 Bland-Altman Plots

We provide Bland-Altman plots as an additional metric in the rPPG-Toolbox. Users can enable the plots via an evaluation parameter in the config file, and will be given further options to configure the plots as the toolbox is refined and expanded. For more details, please refer to the GitHub repo and the associated README.

# J.4 Motion Analysis

We also provide scripts that leverage OpenFace [44] for extracting, visualizing, and analyzing motion in rPPG video datasets.
Specifically, we include a Python script to convert datasets into the .mp4 format for subsequent analysis by OpenFace, a shell script that leverages OpenFace to perform both rigid and non-rigid head motion analysis, and a separate Python script that plots exemplar plots that showcase comparisons of motion between different datasets. Further details can be found in our GitHub repo and the associated README. \ No newline at end of file diff --git a/rppgtoolboxdeepremoteppgtoolbox/images.zip b/rppgtoolboxdeepremoteppgtoolbox/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..c8c27759798a5caa771e99025b98353682bc9876 --- /dev/null +++ b/rppgtoolboxdeepremoteppgtoolbox/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:266e0eae2b21d8e2a43afdc322714f3b54e3ddd43abf78981cdc2c614c4e41ed +size 1613403 diff --git a/rppgtoolboxdeepremoteppgtoolbox/layout.json b/rppgtoolboxdeepremoteppgtoolbox/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..6e2fdc597c8da7a91d835d4c713eeafb57ef9556 --- /dev/null +++ b/rppgtoolboxdeepremoteppgtoolbox/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ca85d5e39a927d7b90ad745f9e94d8fd920fa7355aa36d8400c77436addf4f07 +size 568948 diff --git a/trajdataaunifiedinterfacetomultiplehumantrajectorydatasets/ab71f100-7bbb-444d-9ce5-a23368f05f0c_content_list.json b/trajdataaunifiedinterfacetomultiplehumantrajectorydatasets/ab71f100-7bbb-444d-9ce5-a23368f05f0c_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..c07dfc8456b1874a2cf77114d4b5884e86dbb3b3 --- /dev/null +++ b/trajdataaunifiedinterfacetomultiplehumantrajectorydatasets/ab71f100-7bbb-444d-9ce5-a23368f05f0c_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ac7b73afe37893751f1805a3604d378f5541a648477f940acf2ff0a2918a3b1a +size 66503 diff --git 
a/trajdataaunifiedinterfacetomultiplehumantrajectorydatasets/ab71f100-7bbb-444d-9ce5-a23368f05f0c_model.json b/trajdataaunifiedinterfacetomultiplehumantrajectorydatasets/ab71f100-7bbb-444d-9ce5-a23368f05f0c_model.json new file mode 100644 index 0000000000000000000000000000000000000000..2ab18ca164e47c748c2f9af8b5fba8fe1ff20d56 --- /dev/null +++ b/trajdataaunifiedinterfacetomultiplehumantrajectorydatasets/ab71f100-7bbb-444d-9ce5-a23368f05f0c_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:da7d405225fb6f739d78840042e45ca2113c099d9f853a2f46d03231b16dcaae +size 79967 diff --git a/trajdataaunifiedinterfacetomultiplehumantrajectorydatasets/ab71f100-7bbb-444d-9ce5-a23368f05f0c_origin.pdf b/trajdataaunifiedinterfacetomultiplehumantrajectorydatasets/ab71f100-7bbb-444d-9ce5-a23368f05f0c_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..3e4b16939facff01f5a3ffc895e11b09eb9e8fa1 --- /dev/null +++ b/trajdataaunifiedinterfacetomultiplehumantrajectorydatasets/ab71f100-7bbb-444d-9ce5-a23368f05f0c_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:750315c52ee5a9a54f371a07ab215c7869f2e52ebd3b5a07718513f1583739ba +size 2036467 diff --git a/trajdataaunifiedinterfacetomultiplehumantrajectorydatasets/full.md b/trajdataaunifiedinterfacetomultiplehumantrajectorydatasets/full.md new file mode 100644 index 0000000000000000000000000000000000000000..7410b615027b7331e7b347feb17c782c71246f55 --- /dev/null +++ b/trajdataaunifiedinterfacetomultiplehumantrajectorydatasets/full.md @@ -0,0 +1,264 @@ +# Trajdata: A Unified Interface to Multiple Human Trajectory Datasets + +Boris Ivanovic1 Guanyu Song2 Igor Gilitschenski2 Marco Pavone1,3 +1NVIDIA Research University of Toronto 3Stanford University + +# Abstract + +The field of trajectory forecasting has grown significantly in recent years, partially owing to the release of numerous large-scale, real-world human trajectory datasets for autonomous vehicles 
(AVs) and pedestrian motion tracking. While such datasets have been a boon for the community, they each use custom and unique data formats and APIs, making it cumbersome for researchers to train and evaluate methods across multiple datasets. To remedy this, we present trajdata: a unified interface to multiple human trajectory datasets. At its core, trajdata provides a simple, uniform, and efficient representation and API for trajectory and map data. As a demonstration of its capabilities, in this work we conduct a comprehensive empirical evaluation of existing trajectory datasets, providing users with a rich understanding of the data underpinning much of current pedestrian and AV motion forecasting research, and proposing suggestions for future datasets from these insights. trajdata is permissively licensed (Apache 2.0) and can be accessed online at https://github.com/NVlabs/trajdata.

# 1 Introduction

Research in trajectory forecasting (i.e., predicting where an agent will be in the future) has grown significantly in recent years, partially owing to the success of deep learning methods on the task [1]; availability of new large-scale, real-world datasets (see Fig. 1); and investment in its deployment within domains such as autonomous vehicles (AVs) [2,3,4,5,6,7,8,9] and social robots [10,11,12].

In addition, recent dataset releases have held associated prediction challenges which have periodically benchmarked the field and spurred new developments [13, 14, 15, 16]. While this has been a boon for research progress, each dataset has a unique data format and development API, making it cumbersome for researchers to train and evaluate methods across multiple datasets. For instance, the recent Waymo Open Motion dataset employs binary TFRecords [17] which differ significantly from nuScenes' foreign-key format [18] and Woven Planet (Lyft) Level 5's compressed zarr files [19].
The variety of data formats has also hindered research on topics which either require or greatly benefit from multi-dataset comparisons, such as prediction model generalization (e.g., [20, 21]). To remedy this, we present trajdata: a unified interface to multiple human trajectory datasets. + +Contributions. Our key contributions are threefold. First, we introduce a standard and simple data format for trajectory and map data, as well as an extensible API to access and transform such data for research use. Second, we conduct a comprehensive empirical evaluation of existing trajectory datasets, providing users with a richer understanding of the data underpinning much of pedestrian and AV motion forecasting research. Finally, we leverage insights from these analyses to provide suggestions for future dataset releases. + +![](images/b2a7a0304134386b1f6b1aecd5a7585ab494cd59f3717efdd29c0b6d899e8ea1.jpg) +Figure 1: Recent datasets provide access to thousands of hours of autonomous driving data, albeit with different data formats and APIs, complicating the use of multiple datasets in research projects. + +# 2 Related Work + +Human Trajectory Datasets. Initial trajectory forecasting research employed video motion tracking datasets for benchmarking, primarily due to the availability of annotated agent positions over time. Of these, the ETH [22] and UCY [23] pedestrian datasets were among the most widely-used [1], containing a total of 1536 pedestrians and challenging behaviors such as couples walking together, groups crossing each other, and groups forming and dispersing. Soon after the successful application of deep learning models to pedestrian trajectory forecasting [24], and as data needs grew in autonomous driving research and industry, numerous large-scale datasets have emerged containing significantly more heterogeneous-agent interactive scenarios (e.g., between vehicles and pedestrians) in urban environments. Fig. 
1 visualizes the scale, collection, and annotation strategy of such datasets, with a comprehensive review of earlier human motion datasets available in [1, 25]. In particular, the gradual shift from human annotation to autolabeling can be seen, with the recent large-scale Yandex Shifts [26], Waymo Open Motion [17], and nuPlan [27] datasets employing powerful autolabeling pipelines to accurately label sensor data collected by vehicle fleets at scale.

Multi-Dataset Benchmarking. While the increase in datasets and associated challenges has bolstered research, their unique formats increase the complexity of evaluating methods across datasets, complicating efforts to analyze, e.g., prediction model generalization. To address this issue for pedestrian motion data, OpenTraj [25] created dataloaders for different pedestrian motion datasets as part of its effort to evaluate and compare motion complexity across pedestrian datasets. More recently, TrajNet++ [28] and Atlas [29] present multi-dataset benchmarks to systematically evaluate human motion trajectory prediction algorithms in a unified framework. While these efforts have provided the community with multi-dataset benchmarks, they are primarily focused on pedestrian data. In contrast, trajdata tackles the standardization of both pedestrian and autonomous vehicle datasets, including additional data modalities such as maps.

# 3 trajdata: A Unified Interface to Multiple Human Trajectory Datasets

trajdata is a software package that efficiently compiles multiple disparate dataset formats into one canonical format, with an API to access and transform that data for use in downstream frameworks (e.g., PyTorch [30], which is natively supported). Currently, trajdata supports 8 diverse datasets, comprising 3,216 hours of data, $200+$ million unique agents, and $10+$ locations across 7 countries (see Table 1).
To date, trajdata has been extensively used in research on trajectory forecasting [21], pedestrian [31] and vehicle [32, 33] simulation, and AV motion planning [34, 35]. + +# 3.1 Standardized Trajectory and Map Formats + +Trajectories. For each dataset, trajdata extracts position, velocity, acceleration, heading, and extent (length, width, height) information for all agents in standard SI units (see Fig. 2). In order to support a variety of dataset formats, trajdata has minimal base data requirements: As long as agent positions (i.e., $x$ , $y$ coordinates) are provided, all other dynamic information can be derived automatically. If entire dynamic quantities (e.g., velocity) are not captured in the original dataset, trajdata uses finite differences to compute derivatives by default. Further, missing data between + +Table 1: Datasets currently supported by trajdata. More details can be found in the appendix. + +
| Dataset | Size | Locations | Maps? | Dataset | Size | Locations | Maps? |
| --- | --- | --- | --- | --- | --- | --- | --- |
| ETH [22] | 0.4h | 2 | No | INTERACTION [39] | 16.5h | 4 | Yes |
| UCY [23] | 0.3h | 2 | No | Lyft Level 5 [19] | 1118h | 1 | Yes |
| SDD [40] | 5h | 1 | No | Waymo Open [17] | 570h | 6 | Yes |
| nuScenes [18] | 5.5h | 2 | Yes | nuPlan [27] | 1500h | 4 | Yes |
![](images/89d9d6279f1d6ca8d2f48fe33205f58b4b097090f54f34460186c9e15f84c79c.jpg)
Figure 2: Left: trajdata adopts a tabular representation for trajectory data, leveraging advanced indexing to satisfy user data queries. Right: Agent trajectories from the nuScenes [18] dataset visualized on the scene's VectorMap, containing all of trajdata's core map elements.

timesteps is imputed via linear interpolation. trajdata internally represents and stores trajectory data as tabular data frames, allowing for advanced indexing and data grouping depending on user queries and the use of efficient open-source tabular data storage frameworks such as Apache Arrow [36]. Note that each of these default choices (finite differences, linear interpolation, and tabular data frames) can be changed by the end user.

Maps. To retain the most information from high-definition (HD) dataset maps, trajdata adopts a polyline representation for map data. This choice matches the vast majority of modern trajectory datasets which provide vector map data and makes them immediately compatible with our format. Currently, there are four core map elements: RoadLane, RoadArea, PedCrosswalk, and PedWalkway. As illustrated in Fig. 2, a RoadLane represents a drivable road lane with a centerline and optional left and right boundaries. A RoadArea represents other drivable areas of roads which are not part of lanes, e.g., parking lots or shoulders. A PedCrosswalk denotes a marked area where pedestrians can cross the road. Finally, a PedWalkway marks sidewalks adjacent to roads. Of these, only RoadLane elements are required to be extracted; other elements are optional (they are absent in some datasets). Our map format additionally supports lane connectivity information in the form of left/right adjacent lanes (i.e., lanes accessible by left/right lane changes) and successor/predecessor lanes (i.e., lanes that continue from / lead into the current lane following the road direction).
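To make this polyline map model concrete, a lane element with boundaries and connectivity can be sketched as a small data class. The names and fields below are illustrative only, not trajdata's actual classes:

```python
from dataclasses import dataclass, field
from typing import List, Optional

import numpy as np


@dataclass
class Lane:
    """Illustrative lane element: a centerline polyline plus optional boundaries and connectivity."""

    id: str
    center: np.ndarray  # (N, 2) polyline of x, y points
    left_edge: Optional[np.ndarray] = None  # optional boundary polylines
    right_edge: Optional[np.ndarray] = None
    adj_lanes_left: List[str] = field(default_factory=list)  # reachable via a left lane change
    adj_lanes_right: List[str] = field(default_factory=list)  # reachable via a right lane change
    next_lanes: List[str] = field(default_factory=list)  # successors along the road direction
    prev_lanes: List[str] = field(default_factory=list)  # predecessors

    def length(self) -> float:
        # Arc length of the centerline polyline.
        return float(np.linalg.norm(np.diff(self.center, axis=0), axis=1).sum())
```

Storing connectivity as lane IDs (rather than object references) keeps elements independently serializable, in the spirit of the position-difference encoding described above.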
Each map element is designed to be compatible with popular computational geometry packages, such as Shapely [37], enabling efficient set-theoretic queries to calculate, e.g., road boundary violations. By default, trajdata serializes map data using Google protocol buffers [38], and, in particular, only stores neighboring position differences for efficiency, similar to the implementation used in [19]. Dynamic traffic light information is also supported, and trajdata makes use of a separate data frame to link the traffic signal shown per timestep with the lane ID being controlled.

# 3.2 Core trajdata Functionalities

Multi-dataset training and evaluation. One of trajdata's core functionalities is aggregating data from multiple datasets in a UnifiedDataset object (a PyTorch Dataset subclass by default).

```python
from trajdata import UnifiedDataset

dataset = UnifiedDataset(
    desired_data=["nusc_mini-boston", "sdd-train"], desired_dt=0.1,
    centric="agent", history_sec=(1.0, 3.0), future_sec=(4.0, 4.0),
)  # These settings were used to create Figure 2.
```

![](images/79b150ce76f7d5bef1125139088e1ad9adde4dfea7148eeb879e317d0f12d131.jpg)
Figure 3: trajdata can provide agent-centric (or scene-centric) batches of trajectory data for model training and evaluation in associated AgentBatch (or SceneBatch) objects. The indexing and padding strategy of a few core AgentBatch tensors are visualized here.

The example above creates a dataset that provides agent-centric data batches (i.e., each batch element contains data for one agent at one timestep, see Fig. 3) sourced from only Boston in the nuScenes mini dataset ("nusc_mini-boston") as well as the Stanford Drone Dataset's entire training split ("sdd-train"), with time upsampling ensuring all data is at $10\mathrm{Hz}$ (desired_dt=0.1). history_sec=(1.0, 3.0) specifies that the predicted agent's trajectory must have at least $1.0s$ of history available, with padding for any missing data up to $3.0s$ (see Fig.
3). Similarly, future_sec=(4.0, 4.0) requires that the predicted agent's trajectory have $4.0s$ of future available.

trajdata provides many other capabilities in addition to the above, including scene-centric batches (i.e., data for all agents in a scene at the same timestep), semantic search (e.g., nuScenes [18] provides text descriptions for each scene), agent filtering (e.g., only vehicles), coordinate frame standardization (i.e., making trajectories relative to the predicted agent's frame at the current timestep), map rasterization (e.g., if encoding scene context with a convolutional architecture), data augmentations (e.g., additive Gaussian noise to past trajectories), and general data transforms via custom functions.

Map API. trajdata's standardized vector map object is VectorMap. In addition to providing access to individual map elements (e.g., lanes, sidewalks), it also leverages precomputed spatial indices to make nearest neighbor queries very efficient.

```python
import numpy as np

from trajdata import MapAPI, VectorMap

vec_map: VectorMap = MapAPI(<=>).get_map("nusc-mini:boston-seaport")
lane = vec_map.get_closest_lane(np.array([50.0, 100.0, 0.0]))
```

In the example above, the polyline map of Boston's seaport neighborhood (from nuScenes [18]) is loaded from the user's trajdata cache (its path would be specified instead of <=>) and queried for the closest RoadLane to a given $x, y, z$ position.

Simulation Interface. trajdata also provides a simulation interface that enables users to initialize a scene from real-world data and simulate agents from a specific timestep onwards. Simulated agent motion is recorded by trajdata and can be analyzed with a library of evaluation metrics (e.g., collision and offroad rates, statistical differences to real-world data distributions) or exported to disk. This functionality was extensively used to benchmark learning-based traffic models in [32, 33].
```python
from trajdata.simulation import SimulationScene

simscene = SimulationScene(<=>)  # Specify initial scene to use.
obs = simscene.reset()  # Initialized from real agent states in data.
for t in range(10):  # Simulating 10 timesteps in this example.
    new_state_dict = ...  # Compute the new state of sim agents.
    obs = simscene.step(new_state_dict)
```

In this example, a SimulationScene is initialized from a scene in an existing dataset (specified with the <=> arguments), after which it can be accessed similarly to an OpenAI Gym [41] reinforcement learning environment, using methods like reset and step.

# 4 Dataset Comparisons and Analyses

In this section, we leverage trajdata's standardized trajectory and map representations to directly compare many popular AV and pedestrian trajectory datasets along a variety of metrics. Our goal is to provide a deeper understanding of the datasets underpinning much of human motion research by analyzing their data distributions, motion complexity, and annotation quality.

![](images/c931ed8e26cced4da7adc929b0fdb1c9f6ababf1377a94237f94af830cf1b5b3.jpg)

![](images/516cf03e97691e681990e72210e25e736fdecd3ad90d00b12824213be1f5ff16.jpg)
Figure 4: Left: Number of unique agents per dataset. Right: Distribution of agent types per dataset.

Note that we only analyze dataset training and validation splits, since these are the splits predominantly used by methods for development. We explicitly do not analyze test splits since they are either not available publicly or because doing so may harm existing benchmark validity. Further, while trajdata supports data frequency up- and down-scaling via interpolation and down-sampling, all of the following analyses were conducted in each dataset's native data resolution. All analyses were performed using the latest version of trajdata at the time of writing (v1.3.2) on a desktop computer with 64 GB of RAM and an AMD Ryzen Threadripper PRO 3975WX 32-core CPU.
For larger datasets, an NVIDIA DGX-1 server with 400 GB of RAM and 64 CPU cores was used.

# 4.1 Agent Distributions

Population. To build a fundamental understanding of the considered datasets, we first analyze and compare agent populations. Fig. 4 visualizes overall agent counts and proportions per dataset. As can be expected, modern large-scale AV datasets such as Waymo [17] and Lyft Level 5 [19] contain multiple orders of magnitude more agents than earlier pedestrian datasets SDD [40], ETH [22], or UCY [23]. However, as we will show later, pedestrian datasets still provide value in terms of agent diversity, density, and motion complexity in popular social robotics settings such as college campuses.

As can be seen in Fig. 4 (right), the vast majority of agents in AV datasets are vehicles or pedestrians, with the exception of Lyft Level 5 [19] where $71.8\%$ of agents have unknown types. In contrast, bicycles (a relatively niche category in many datasets) account for $41\%$ of all agents in SDD [40] (indeed, biking is a popular method of transportation around Stanford's large campus). Such imbalances in agent populations are indicative of real-world distributions, e.g., motorcycles make up only $3.5\%$ of vehicles in the USA [42], similar to their proportion in nuScenes [18] $(1.6\%)$.

Density and Observation Duration. In addition to which agent types are captured in scenes, the amount and density of agents can be an important desideratum (e.g., for research on crowd behavior) or computational consideration (e.g., for methods whose runtime scales with the number of agents). Fig. 5 visualizes the distribution of the number of agents observed per scene per timestep (left), as well as the maximum number of simultaneous agents per scene (right). As can be seen, urban scenarios captured in modern AV datasets frequently contain $100+$ detected agents (with a long tail extending to $250+$ agents).
In this respect, ETH [22], UCY [23], and INTERACTION [39] are limited by their fixed-camera and drone-based data-collection strategies compared to the comprehensive on-vehicle sensors used in nuScenes [18], Waymo [17], Lyft [19], and nuPlan [27]. However, while ETH [22], UCY [23], and INTERACTION [39] do not contain as many agents, they consistently provide the highest-density scenes (see Fig. 6), especially for pedestrians and bicycles. We compute agent density by dividing the number of agents in a scene by their overall bounding rectangle area, as in [25].

Each dataset supported by trajdata adopts different scenario lengths and corresponding agent observation durations. As can be seen in Fig. 7, AV datasets consist of scenarios with lengths ranging from $4s$ in INTERACTION [39] to $25s$ in Lyft Level 5 [19]. The peaks at the right of each AV dataset duration distribution are caused by the always-present ego-vehicle (for Vehicles) as well as other agents detected throughout the scene (common in steady traffic, parking lots, or at an intersection with stopped traffic and pedestrians waiting to cross). One can also see that Lyft Level 5 [19] agent detections are much shorter-lived compared to other AV datasets' relatively uniform distributions (Waymo [17], nuScenes [18], and nuPlan [27]). This could be caused by Lyft's annotations being collected from an onboard perception system [19] (which are affected by noise and occlusions) vs. human annotators [18] or autolabeling [27, 17], which can leverage data from past and future timesteps to be more robust to such errors.

![](images/d881a1cd17df4ba397330f33e34b75ac0faa6c74019c78df482693a64679269e.jpg)
Figure 5: Left: Number of agents present per timestamp and scene. Right: Maximum number of agents present at the same time per scene.

![](images/092a088fbe98c04da1a4afbcb214c43e086f86f27e84845c297614a35188179a.jpg)
Figure 6: Agent density per timestep and scene.
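The density heuristic used above (agent count divided by the agents' overall bounding rectangle area, as in [25]) is simple enough to sketch directly. The stand-alone version below assumes 2D agent positions and is not trajdata's implementation:

```python
import numpy as np


def agent_density(positions: np.ndarray) -> float:
    """Number of agents divided by the area of their axis-aligned bounding rectangle.

    positions: (N, 2) array of agent x, y coordinates at one timestep.
    """
    mins = positions.min(axis=0)
    maxs = positions.max(axis=0)
    width, height = maxs - mins
    area = width * height
    # Degenerate scenes (e.g., a single agent) have zero area; report NaN.
    return len(positions) / area if area > 0 else float("nan")
```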
We conduct additional comparisons between data collection methodologies in Section 4.3. + +Ego-Agent Distances. When developing AV perception systems, an important consideration is the sensor range(s) necessary to facilitate the desired prediction and planning horizons as well as provide advanced warning of critical situations (e.g., stopped traffic on a highway). In Fig. 8, we compare the distribution of ego-agent distances and find that, while nuScenes [18] and Lyft Level 5 [19] have long-tailed distributions extending past $200m$ , Waymo [17] and nuPlan [27] appear to have artificial cut-offs at $75 - 80m$ , potentially to maintain data quality by avoiding poor data from distant agents. However, it would be more useful to maintain distant detections and add uncertainty outputs from the autolabeler to support uncertain long-range detection research in addition to improving autolabeling. + +Mapped Areas. HD maps are a core component of many AV datasets, frequently leveraged in trajectory forecasting and motion planning research to provide scene context and geometric lane information (e.g., for global search-based planning and trajectory optimization). Current AV dataset maps are very large (see Table 2 in the appendix) and comprehensive, spanning multiple neighborhoods in different cities. However, not all HD maps are created equal, commonly differing along three axes: Area completeness, lane definitions, and traffic lights. While most AV datasets provide complete HD maps of neighborhoods, Waymo [17] differs by only providing local map crops per scenario without a common reference frame across scenarios [2]. This also significantly increases the storage requirements of Waymo [17] maps compared to other datasets. + +Lane definitions can also differ significantly between datasets, with intersections being a notable differentiator. 
For instance, the nuScenes dataset [18] does not annotate intersections fully, opting for only lane centerlines without associated edges (Fig. 2 shows an example). Lyft Level 5 [19] and nuPlan [27] both include full lane center and edge information for all possible motion paths through an intersection. Waymo [17] maps are unique in that they provide full lane center and boundary information, but there are many gaps in the associations between lane centerlines and boundaries, making it difficult to construct lane edge polylines or lane area polygons. As a result, we exclude Waymo maps from map-based analyses in this work.

# 4.2 Motion Complexity

Measuring the complexity of driving scenarios is an important open problem in the AV domain, with a variety of proposed approaches ranging from heuristic methods [25] to powerful conditional behavior prediction models [43].

![](images/321dba5f1b9b22ba2f28eca7be984058d0b952f4fed4e01be5724c948a9f5af5.jpg)
Figure 7: Distributions of the length of time agents are observed in each scene.

![](images/6ca9af5615af36ef07a1adfe72456f5188bc42126655dd5d39b8eae838e3b516.jpg)

![](images/51997e39bdd2c52f8b91228cfaf4357f3727a30e1edf6ab381604d248fd97968.jpg)

![](images/bf237d8960bae0313d334fae0d4372bb6bdf4c6c1eb649edaced94c697db7cbd.jpg)

![](images/3e63734efaaf404a0c0acdf978ae15c6bc6e7be97534c9ba4af0c3642a9b19a7.jpg)

![](images/0e8c899009729487f7aeff98084bfba9a728f12330a3e7e4a037ea9d86a15378.jpg)
Figure 8: Distribution of distances between agents and data-collecting ego-vehicle in AV datasets.

![](images/e1ac744692f645988aa7c556c1ffaec1f24fe8083a2430df9d7f0bb8b979c86d.jpg)

![](images/b2707884397575d93f821868baf82a026e47f33e0a33b89b00dbd4c6c34699c5.jpg)

![](images/d158b649575570025ea9e62c4a0bb370e4ade971c0806eaf9c9878b5f9cab927.jpg)

![](images/c258f9d1e13864ee78abcd540335b48bfbae5d0d8f5567ff536e0c5a22058322.jpg)
To avoid potential biases in analyzing datasets with an externally-trained model, we employ simple and interpretable heuristics similar to [25].

Motion Diversity. We first analyze distributions of dynamic agent quantities (e.g., speed, acceleration, jerk). As can be seen in Fig. 9, the majority of speed distributions have high peaks at zero (no motion). This is corroborated by Table 3 in the appendix, which shows that a significant portion of agents are stationary in many datasets, especially for nuScenes [18] (17.5%) and Waymo [17] (53.6%). After the initial peak, agent speed distributions drop sharply to a roughly uniform plateau (up to $20m/s$ for vehicles) before dropping completely around $30m/s$ (a common highway speed around the world).

While SDD [40] and INTERACTION [39] have sensible vehicle speeds, their pedestrian speeds can be too high. Such high speeds may be caused by annotations near the edge of drone camera view or by rectification artifacts near the image border. Additionally, the very long-tailed distributions of Lyft [19] and Waymo [17] vehicle, pedestrian, and bicycle speeds (exceeding $60m/s$) show a remaining area of improvement for state-of-the-art AV perception systems and autolabeling pipelines. Comparisons of acceleration and jerk can be found in the appendix. Overall, from dynamic quantities alone, Waymo [17] and Lyft [19] provide the most diversity in agent motion. If such long-tailed data is undesirable, the INTERACTION [39] dataset provides the most realistic set of vehicle speeds.

Trajectory Nonlinearity. To analyze the spatial diversity of agent trajectories, we first compare each agent's heading to its heading at the initial timestep. As can be seen in Fig. 10, and reiterating earlier analyses, the vast majority of human movement is straight and linear ( $\Delta h = 0$ ). Moving away from the center, we also see repeated symmetric peaks at $\pm \frac{\pi}{2}$ (capturing left and right turns) and $\pm k\pi$ in some datasets.
One possible reason for these periodic peaks is that they are an artifact of the autolabeling methods used in the datasets (since only datasets that autolabel sensor data are affected); another is that their respective scene geometries contain more roundabouts, cul-de-sacs, and repeated turns than other datasets (more detailed heading distributions can be found in the appendix). We can also see that pedestrians' distributions are more uniform as they do not have to adhere to rigid road geometry.

Path Efficiency. Lastly, we also measure agent path efficiencies, defined as the ratio of the distance between trajectory endpoints to the trajectory length [25]. Intuitively, the closer to $100\%$, the closer the trajectory is to a straight line. As can be seen in Fig. 15 in the appendix, most path efficiency distributions are roughly uniform, with peaks near $100\%$, echoing earlier straight-line findings. However, the INTERACTION [39] dataset is an outlier in that its agent trajectories are predominantly straight lines with much less curved motion than other AV and pedestrian datasets.

# 4.3 Annotation Quality

While analyzing datasets' true annotation accuracy would be best, neither we nor the original data annotators have access to the underlying real-world ground truth. As a proxy, we instead analyze the self-consistency of annotations in the form of incidence rates of collisions between agents, off-road driving, and uncomfortable high-acceleration events (using $0.4g$ as a standard threshold [44, 45]).

![](images/9a24f61180bc3a82b32790fa60897f3d11cf4c6e7573d848371a328b0c63e01b.jpg)
Figure 9: Agent speed distributions per dataset and agent type.
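The motion-complexity heuristics used in this section (heading change relative to the first timestep, path efficiency as endpoint distance over path length, and harsh-acceleration events at a $0.4g$ threshold) can be sketched as follows. This is a simplified stand-alone version, not the code used for the analyses:

```python
import numpy as np

G = 9.81  # standard gravity, m/s^2


def heading_change(headings: np.ndarray) -> np.ndarray:
    """Heading at each timestep relative to the first, wrapped to [-pi, pi)."""
    dh = headings - headings[0]
    return (dh + np.pi) % (2 * np.pi) - np.pi


def path_efficiency(xy: np.ndarray) -> float:
    """Endpoint distance divided by trajectory arc length (1.0 = straight line)."""
    seg_lengths = np.linalg.norm(np.diff(xy, axis=0), axis=1)
    arc = seg_lengths.sum()
    chord = np.linalg.norm(xy[-1] - xy[0])
    return chord / arc if arc > 0 else 1.0


def harsh_events(accel: np.ndarray, thresh_g: float = 0.4) -> np.ndarray:
    """Boolean mask of timesteps whose acceleration magnitude exceeds thresh_g * g."""
    return np.linalg.norm(accel, axis=1) > thresh_g * G
```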
![](images/a40bf8bc3376b96d4090c88a9bf095d560b51cc1597c98633ff0ef04cbe40b1a.jpg)

![](images/da68c665debbad0c9b2a33bd15430013e1f199510c08412d541c6fa85d7f983f.jpg)

![](images/770743230c906a9e2bf910afcf1f0c21b7bd0b22ea60ceaf84a472e55df6ddb4.jpg)

![](images/628bcdd77b4d2c677a09883bd18d7afe28a2712e2de37b02363dedfadfe98b66.jpg)

![](images/b469f721061c9fabaef71cea650552bd0e5bf7ca51c91cc7646cec9598056d42.jpg)
Figure 10: Changes in heading relative to an agent's first timestep.

![](images/e66a0fa7e2c9da96c1747c9cd3569265b4a52fcdf9bea910a10fd2387b5579f6.jpg)

![](images/8c17fe52798b6cf435404ae2b14088ee85b4296ebde08a8265d41c41d8490522.jpg)

![](images/ee807c481a152d96479767302b6003bb33f4df97dd6860b267e481be2dcc455b.jpg)

![](images/9b18be54760711bfa1e15cadb6e06e5ddff15ed04fe97ee12e1aa0e3e3946dc2.jpg)

Virtually all observed agent data is free of collisions and off-road driving, save for rare one-offs (e.g., the INTERACTION dataset contains a minor car accident [39]). We denote bounding box intersections between agents as collisions, and agent center-of-mass exiting the road boundary as off-road driving. Collisions typically indicate errors in bounding box annotations, whereas off-road driving can indicate erroneous bounding box dimensions, missing map coverage, or harsh driving that, e.g., cuts corners during a right turn.

As can be seen in Fig. 11 (left), most vehicles in datasets experience collision rates below $5\%$. Notably, state-of-the-art autolabeling systems (e.g., used in Waymo [17]) are nearly matching the accuracy of human annotations (e.g., used in nuScenes [18]) in terms of resulting collision rates. However, detecting agents from a near-ground perspective (even with 3D LiDAR) is a very challenging task, and current performance still lags behind high-altitude viewpoints.
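The two self-consistency checks above can be approximated with basic geometry. As a simplification, the sketch below uses axis-aligned boxes and a ray-casting point-in-polygon test; it is not any dataset's actual oriented-box geometry:

```python
import numpy as np


def aabb(x: float, y: float, length: float, width: float) -> tuple:
    """Axis-aligned bounding box (xmin, ymin, xmax, ymax) around an agent center."""
    return (x - length / 2, y - width / 2, x + length / 2, y + width / 2)


def boxes_collide(a: tuple, b: tuple) -> bool:
    """Bounding-box intersection between two agents counts as a collision."""
    return a[0] < b[2] and b[0] < a[2] and a[1] < b[3] and b[1] < a[3]


def off_road(x: float, y: float, road_polygon: np.ndarray) -> bool:
    """Agent center of mass outside the road polygon counts as off-road (ray casting)."""
    px, py = road_polygon[:, 0], road_polygon[:, 1]
    inside = False
    n = len(road_polygon)
    for i in range(n):
        j = (i - 1) % n
        # Count edge crossings of a horizontal ray extending to the right of (x, y).
        if (py[i] > y) != (py[j] > y):
            x_cross = px[i] + (y - py[i]) * (px[j] - px[i]) / (py[j] - py[i])
            if x < x_cross:
                inside = not inside
    return not inside
```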
In particular, the INTERACTION [39] dataset achieves orders of magnitude lower vehicle collision, off-road, and harsh acceleration rates owing to its drone-based data collection strategy. In theory, SDD [40] should enjoy a similar advantage, but it only provides axis-aligned bounding box annotations (which overestimate agent extents) and Stanford's college campus contains many more interactive agents than other urban environments. More generally, the notion of bounding box intersections as collisions does not transfer exactly to pedestrians as they can enter/exit cars and walk in close groups, and further study is needed to robustly distinguish between errant motion and normal interactive motion.

In Fig. 11 (middle), we find that vehicles in general experience very few ( $< 1\%$ ) harsh acceleration events, with Waymo [17], Lyft [19], and nuScenes [18] all having the highest incidence, commensurate with their earlier-discussed long-tail acceleration distributions. Lastly, we find in Fig. 11 (right) that the INTERACTION [39] and nuPlan [27] agent annotations are well-aligned onto their maps, whereas nuScenes [18] suffers from poor map coverage away from main roads (there are many annotated parked cars next to the main road) and Lyft [19] suffers from high false-positive detections next to the main road (the majority of which take the Unknown class).

# 5 Conclusions and Recommendations

The recent releases of large-scale human trajectory datasets have significantly accelerated the field of AV research. However, their unique data formats and custom developer APIs have complicated multi-dataset research efforts (e.g., [20, 21]). In this work, we present trajdata, a unified trajectory data loader that aims to harmonize data formats, standardize data access APIs, and simplify the process of using multiple AV datasets within the AV research community with a simple, uniform, and efficient data representation and development API.
We used trajdata to comprehensively compare existing trajectory datasets, finding that, in terms of annotation self-consistency, drone-based data collection methods yield significantly more accurate bird's-eye view bounding box annotations than even state-of-the-art AV perception stacks with LiDAR (albeit with much less spatial coverage), modern autolabeling pipelines are nearing human annotation performance, and smaller-scale pedestrian datasets can still be useful for investigations requiring high-agent-density scenarios.

![](images/5d1a4b1c264ca8172c04f66b42ddb90f984695098f6ab92c2b95600a355cd32e.jpg)
Figure 11: Self-consistency failure rates per dataset and agent type, in the form of collision (left), high vehicle acceleration (middle), and off-road (right) rates.

![](images/c6cb7421603f5194fc9bb3f94c8618f71edb645cce4742a1605acda4513fc04e.jpg)

![](images/0c4cd106c9ed76fbabf9c755e6a468be873f80980ed586b8891d99d5b29ec463.jpg)

As concrete recommendations, we saw that some datasets artificially limit the distance agents are autolabeled. Instead, it would be more useful to the long-range detection community to remove such restrictions, but add autolabeler-output uncertainties to long-range detections, supporting uncertain perception research along the way. Further, incorporating explicit self-consistency checks within autolabeling pipelines and catching, e.g., collisions, prior to release can both improve the autolabeling method as well as the resulting data labels.

More broadly, providing researchers with access to more data comprised of various agent types from diverse geographies should help in modeling rare agent types and behaviors, in addition to aiding in the generalization of methods to multiple geographies. However, as we have seen in prior sections, there is an overwhelming bias towards straight-line driving, and one capability missing from trajdata is the ability to (re)balance data on a semantic (behavioral) level.
Finally, even if lower-level trajectory classes (e.g., driving straight, turning left/right, slowing down, speeding up, etc.) are balanced, an important higher-level consideration during original dataset curation time is to ensure that AV datasets explore all geographic regions within an environment, and not only those of certain socioeconomic statuses or transportation access.

Future work will address the current limitations of trajdata (e.g., expanding the number of supported datasets and adding new capabilities such as geometric map element associations to support Waymo-like map formats [17]). Further, incorporating sensor data would also enable perception research as well as joint perception-prediction-planning research, an exciting emerging AV research field.

# Acknowledgments and Disclosure of Funding

We thank all past and present members of the NVIDIA Autonomous Vehicle Research Group for their code contributions to trajdata and feedback after using it in projects. We additionally thank Leon De Andrade, Alex Naumann, and Stepan Konev for their contributions to trajdata on GitHub.

# References

[1] A. Rudenko, L. Palmieri, M. Herman, K. M. Kitani, D. M. Gavrila, and K. O. Arras, "Human motion trajectory prediction: A survey," Int. Journal of Robotics Research, vol. 39, no. 8, pp. 895-935, 2020.
[2] General Motors, "Self-driving safety report," 2018, Available at https://www.gm.com/content/dam/company/docs/us/en/gmcom/gmsafetyreport.pdf.
[3] Uber Advanced Technologies Group, "A principled approach to safety," 2020, Available at https://uber.app.box.com/v/UberATGSafetyReport.
[4] Lyft, "Self-driving safety report," 2020, Available at https://2eg1kz1onwfq1djllo2xh4bb-wpengine.netdna-ssl.com/wp-content/uploads/2020/06/Safety_Report_2020.pdf.
[5] Waymo, "Safety report," Waymo LLC, 2021, Available at https://waymo.com/safety/safety-report.

[6] Argo AI, "Developing a self-driving system you can trust," Apr.
2021, Available at https://www.argo.ai/wp-content/uploads/2021/04/ArgoSafetyReport.pdf.
[7] Motional, "Voluntary safety self-assessment," 2021, Available at https://drive.google.com/file/d/1JjfQByU_hWvSfkWzQ8PK2ZOZfVCqQGDB/view.
[8] Zoox, "Safety report volume 2.0," 2021, Available at https://zoox.com/safety/.
[9] NVIDIA, "Self-driving safety report," 2021, Available at https://images.nvidia.com/content/self-driving-cars/safety-report/auto-print-self-driving-safety-report-2021-update.pdf.
[10] T. Kruse, A. K. Pandey, R. Alami, and A. Kirsch, "Human-aware robot navigation: A survey," Robotics and Autonomous Systems, vol. 61, no. 12, pp. 1726-1743, 2013.
[11] S. F. Chik, C. F. Yeong, E. L. M. Su, T. Y. Lim, Y. Subramaniam, and P. J. H. Chin, "A review of social-aware navigation frameworks for service robot in dynamic human environments," Journal of Telecommunication, Electronic and Computer Engineering, vol. 8, no. 11, pp. 41-50, 2016.
[12] P. A. Lasota, T. Fong, and J. A. Shah, "A survey of methods for safe human-robot interaction," Foundations and Trends in Robotics, vol. 5, no. 4, pp. 261-349, 2017.
[13] nuTonomy, "nuScenes prediction challenge," https://www.nuscenes.org/prediction?externalData=all&mapData=all&modalities=Any, 2020.
[14] Lyft Level 5, "Lyft motion prediction for autonomous vehicles," https://www.kaggle.com/competitions/lyft-motion-prediction-autonomous-vehicles, 2020.
[15] Waymo, "Waymo open dataset motion prediction challenge," https://waymo.com/open/challenges/, 2021.
[16] Yandex Research, "Shifts challenge: Robustness and uncertainty under real-world distributional shift," https://research.yandex.com/shifts, 2021.
[17] S. Ettinger, S. Cheng, B. Caine, C. Liu, H. Zhao, S. Pradhan, Y. Chai, B. Sapp, C. Qi, Y. Zhou, Z. Yang, A. Chouard, P. Sun, J. Ngiam, V. Vasudevan, A. McCauley, J. Shlens, and D. Anguelov, "Large scale interactive motion forecasting for autonomous driving: The waymo open motion dataset," in IEEE Int. Conf.
on Computer Vision, 2021.
[18] H. Caesar, V. Bankiti, A. H. Lang, S. Vora, V. E. Liong, Q. Xu, A. Krishnan, Y. Pan, G. Baldan, and O. Beijbom, "nuScenes: A multimodal dataset for autonomous driving," in IEEE Conf. on Computer Vision and Pattern Recognition, 2020.
[19] J. Houston, G. Zuidhof, L. Bergamini, Y. Ye, A. Jain, S. Omari, V. Iglovikov, and P. Ondruska, "One thousand and one hours: Self-driving motion prediction dataset," in Conf. on Robot Learning, 2020.
[20] T. Gilles, S. Sabatini, D. Tsishkou, B. Stanciulescu, and F. Moutarde, "Uncertainty estimation for cross-dataset performance in trajectory prediction," in IEEE Int. Conf. on Robotics and Automation Workshop on Fresh Perspectives on the Future of Autonomous Driving, 2022.
[21] B. Ivanovic, J. Harrison, and M. Pavone, "Expanding the deployment envelope of behavior prediction via adaptive meta-learning," in IEEE Int. Conf. on Robotics and Automation, 2023.
[22] S. Pellegrini, A. Ess, K. Schindler, and L. v. Gool, "You'll never walk alone: Modeling social behavior for multi-target tracking," in IEEE Int. Conf. on Computer Vision, 2009.
[23] A. Lerner, Y. Chrysanthou, and D. Lischinski, "Crowds by example," Computer Graphics Forum, vol. 26, no. 3, pp. 655-664, 2007.
[24] A. Alahi, K. Goel, V. Ramanathan, A. Robicquet, L. Fei-Fei, and S. Savarese, "Social LSTM: Human trajectory prediction in crowded spaces," in IEEE Conf. on Computer Vision and Pattern Recognition, 2016.

[25] J. Amirian, B. Zhang, F. V. Castro, J. J. Baldelomar, J.-B. Hayet, and J. Pettré, "OpenTraj: Assessing prediction complexity in human trajectories datasets," in Asian Conference on Computer Vision, 2020.
[26] A. Malinin, N. Band, Y. Gal, M. Gales, A. Ganshin, G. Chesnokov, A. Noskov, A. Ploskonosov, L. Prokhorenkova, I. Provilkov, V. Raina, V. Raina, D. Roginskiy, M. Shmatova, P. Tiges, and B. Yangel, "Shifts: A dataset of real distributional shift across multiple large-scale tasks," in Conf.
on Neural Information Processing Systems Datasets and Benchmarks Track, 2021. [Online]. Available: https://openreview.net/forum?id=qM45LHaWM6E
[27] H. Caesar, J. Kabzan, K. S. Tan, W. K. Fong, E. Wolff, A. Lang, L. Fletcher, O. Beijbom, and S. Omari, "nuPlan: A closed-loop ML-based planning benchmark for autonomous vehicles," 2021, Available at https://arxiv.org/abs/2106.11810.
[28] P. Kothari, S. Kreiss, and A. Alahi, "Human trajectory forecasting in crowds: A deep learning perspective," IEEE Transactions on Intelligent Transportation Systems, vol. 23, no. 7, pp. 7386-7400, 2022.
[29] A. Rudenko, L. Palmieri, W. Huang, A. J. Lilienthal, and K. O. Arras, "The atlas benchmark: An automated evaluation framework for human motion prediction," in IEEE Int. Conf. on Robot and Human Interactive Communication, 2022.
[30] A. Paszke, S. Gross, S. Chintala, G. Chanan, E. Yang, Z. DeVito, Z. Lin, A. Desmaison, L. Antiga, and A. Lerer, "Automatic differentiation in PyTorch," in Conf. on Neural Information Processing Systems - Autodiff Workshop, 2017.
[31] D. Rempe, Z. Luo, X. B. Peng, Y. Yuan, K. Kitani, K. Kreis, S. Fidler, and O. Litany, "Trace and Pace: Controllable pedestrian animation via guided trajectory diffusion," in IEEE Conf. on Computer Vision and Pattern Recognition, 2023.
[32] D. Xu, Y. Chen, B. Ivanovic, and M. Pavone, "BITS: Bi-level imitation for traffic simulation," in IEEE Int. Conf. on Robotics and Automation, 2023.
[33] Z. Zhong, D. Rempe, D. Xu, Y. Chen, S. Veer, T. Che, B. Ray, and M. Pavone, "Guided conditional diffusion for controllable traffic simulation," in IEEE Int. Conf. on Robotics and Automation, 2023.
[34] F. Christianos, P. Karkus, B. Ivanovic, S. V. Albrecht, and M. Pavone, "Planning with occluded traffic agents using bi-level variational occlusion models," in IEEE Int. Conf. on Robotics and Automation, 2023.
[35] Y. Chen, P. Karkus, B. Ivanovic, X. Weng, and M.
Pavone, "Tree-structured policy planning with learned behavior models," in IEEE Int. Conf. on Robotics and Automation, 2023.
[36] The Apache Software Foundation, "Apache arrow," 2023, Available at https://github.com/apache/arrow.
[37] S. Gillies, C. van der Wel, J. Van den Bossche, M. W. Taves, J. Arnott, B. C. Ward, and others, "Shapely," 2023, Available at https://github.com/shapely/shapely.
[38] Google Inc., "Protocol buffers - Google's data interchange format," 2023, Available at https://github.com/protocolbuffers/protobuf.
[39] W. Zhan, L. Sun, D. Wang, H. Shi, A. Clausse, M. Naumann, J. Kümmerle, H. Königshof, C. Stiller, A. de La Fortelle, and M. Tomizuka, "INTERACTION Dataset: An international, adversarial and cooperative motion dataset in interactive driving scenarios with semantic maps," 2019, Available at https://arxiv.org/abs/1910.03088.
[40] A. Robicquet, A. Sadeghian, A. Alahi, and S. Savarese, "Learning social etiquette: Human trajectory prediction in crowded scenes," in European Conf. on Computer Vision, 2016.
[41] G. Brockman, V. Cheung, L. Pettersson, J. Schneider, J. Schulman, J. Tang, and W. Zaremba, "OpenAI Gym," 2016, Available at https://arxiv.org/abs/1606.01540.

[42] Bureau of Transportation Statistics, "National Transportation Statistics. Number of U.S. Aircraft, Vehicles, Vessels, and Other Conveyances," U.S. Dept. of Transportation, Tech. Rep., 2023.
[43] E. Tolstaya, R. Mahjourian, C. Downey, B. Varadarajan, B. Sapp, and D. Anguelov, "Identifying driver interactions via conditional behavior prediction," in IEEE Int. Conf. on Robotics and Automation, 2021.
[44] B. G. Simons-Morton, J. D. Ouimet, J. Wang, S. G. Klauer, S. E. Lee, and T. A. Dingus, "Hard braking events among novice teenage drivers by passenger characteristics," Driving Assessment Conference, vol. 5, pp. 236-242, 6 2009. [Online]. Available: https://pubs.lib.uowa.edu/driving/article/id/28044/
[45] S. G. Klauer, T. A. Dingus, V. L. Neale, J. D. Sudweeks, and D. J.
Ramsey, "Comparing real-world behaviors of drivers with high versus low rates of crashes and near-crashes," National Highway Traffic Safety Administration, Tech. Rep. DOT HS 811 091, 2009.

# "Why Not Looking backward?"
A Robust Two-Step Method to Automatically Terminate Bayesian Optimization

Shuang Li

Control and Simulation Center, Harbin Institute of Technology, China. National Key Laboratory of Modeling and Simulation for Complex Systems, China. ShuangLi.hit@outlook.com

Ke Li

Department of Computer Science, University of Exeter, EX4 4RN, Exeter, UK. k.li@exeter.ac.uk

Wei Li*

Control and Simulation Center, Harbin Institute of Technology, China. National Key Laboratory of Modeling and Simulation for Complex Systems, China. frank@hit.edu.cn

# Abstract

Bayesian Optimization (BO) is a powerful method for tackling expensive black-box optimization problems. As a sequential model-based optimization strategy, BO iteratively explores promising solutions until a predetermined budget, either iterations or time, is exhausted. The decision on when to terminate BO significantly influences both the quality of solutions and its computational efficiency. In this paper, we propose a simple, yet theoretically grounded, two-step method for automatically terminating BO. Our core concept is to proactively identify if the search is within a convex region by examining previously observed samples. BO is halted once the local regret within this convex region falls below a predetermined threshold. To enhance numerical stability, we propose an approximation method for calculating the termination indicator by solving a bilevel optimization problem. We conduct extensive empirical studies on diverse benchmark problems, including synthetic functions, reinforcement learning, and hyperparameter optimization. Experimental results demonstrate that our proposed method saves up to $\approx 80\%$ of the computational budget while incurring an order of magnitude smaller performance degradation compared with the other peer methods. In addition, our proposed termination method is robust to the setting of its termination threshold.
# 1 Introduction

"Nature does not hurry, yet everything is accomplished." — Lao Tzu

In this paper, we consider the black-box optimization problem (BBOP) defined as follows:

$$
\underset{\mathbf{x}\in\Omega}{\text{maximize}}\ f(\mathbf{x}), \tag{1}
$$

where $\mathbf{x} = (x_{1},\dots,x_{n})^{\top}$ is a decision vector (variable), $\Omega = [x_i^{\mathrm{L}},x_i^{\mathrm{U}}]_{i = 1}^n\subset \mathbb{R}^n$ represents the search space, and $f:\Omega \to \mathbb{R}$ is the objective function, whose image constitutes the attainable set in the objective space. In real-world scenarios, function evaluations (FEs) of $f(\mathbf{x})$ can be costly, giving rise to expensive BBOPs. Bayesian optimization (BO) has emerged as one of the most effective methods for addressing expensive BBOPs. BO is a sequential model-based optimization technique consisting of two iterative steps: i) employing limited expensive FEs to construct a surrogate model of the physical objective function, such as a Gaussian process (GP) model [35]; and ii) selecting the next point of interest for costly FE by optimizing an acquisition function, e.g., probability of improvement (PI) [18], expected improvement (EI) [16], and upper confidence bound (UCB) [31].

![](images/cca390747e4818979f818e127f5cb294581998ca0a160c5b4c8f3e1633e9cd5a.jpg)
Figure 1: Trajectories of termination criteria used in [22], [28] and [24] on the Ackley and Levy functions with $n = 1$. Results are collected from 21 independent runs of vanilla BO; the mean value of the termination indicator of each termination criterion is plotted as a solid line together with its confidence interval. Please refer to Section 3.2 for a description of these termination criteria, as well as the meaning of $\kappa_{\mathrm{PI}}$, $\kappa_{\mathrm{EI}}$ and $\kappa_{\mathrm{diff}}$.

![](images/055402a32f50bc49d312e1f13f4dc45a03de4e99fbb318d34b5bf6f541ca3034.jpg)

![](images/9b706e645f9444d421cca143f180ca3068ac6fa481586cdee59722a6e48405a4.jpg)
Numerous theoretical and methodological advancements have been made in BO. Interested readers can refer to comprehensive survey papers [29, 11] and a recent textbook [13] for further information.

Nevertheless, the question of when to terminate the search process of BO remains a largely underexplored area in the literature. At present, the most prevalent termination criterion is a pre-specified budget, such as the number of FEs or wall-clock time. Though intuitive, this approach neglects the search dynamics inherent to different BBOPs. As a result, this strategy is rigid and offers no general rule for determining an appropriate budget across various problem settings. If the budget is too small, BO may terminate prematurely, yielding a suboptimal solution. On the contrary, an excessive budget may lead to wasted computational resources. Another simple termination method involves stopping BO if the current best solution remains unchanged for a predetermined number of consecutive FEs. However, as highlighted by [24], this strategy also fails to consider the observed data during the sequential model-based optimization process and relies on a pre-defined threshold.

Beyond the aforementioned 'naive' approaches, a limited number of dedicated efforts have been made to address the termination of BO. One notable method involves monitoring the progress of BO by termination indicators, such as the maximum of EI [28, 16] or PI [22]. In this approach, BO is terminated when the corresponding termination indicator falls below a pre-specified threshold. Very recently, Makarova et al. [24] proposed using the difference between the minima of the lower confidence bound (LCB) and the UCB as the termination indicator. As illustrated in Figure 1, we observe that all criteria used in these termination approaches exhibit significant oscillation during the optimization process.
This can be attributed to: i) the stochastic nature of BO itself, and ii) numerical errors arising from the non-convex optimization of acquisition functions. Furthermore, as shown in Figures 1(a) and (b), the variation range of the same criterion can differ substantially when addressing problems with distinct fitness landscapes. These factors make determining a universally applicable threshold in practice challenging, resulting in fragile and less intuitive termination criteria compared to simply establishing a budget. Additionally, we find that these termination criteria are 'myopic', as decision-making is based solely on the observations at the current step, leading to a lagged termination. For instance, consider the selected samples shown in Figure 2; it is difficult, if not impossible, to determine when to terminate BO until $t = 20$. However, if we look backward to $t = 5$, it becomes evident that BO is likely to converge by $t = 10$.

Our contributions. In light of the aforementioned challenges, this paper proposes a novel termination method for BO that proactively detects whether the search is located in a convex region of $-f(\mathbf{x})$ by examining previously observed samples. BO is terminated if the local regret within this convex region falls below a predetermined threshold. To improve numerical stability, we introduce an approximation method for calculating the termination indicator by solving a bilevel optimization problem.

![](images/54e951de7acdf4f3efffffa3de7ad7c579ef44e0118d05e598409da2e671d584.jpg)
Figure 2: Search dynamics of vanilla BO on the Ackley function $(n = 1)$ at different time steps after the initialization. In particular, $t = 5$ indicates five new samples are collected after the initialization.

![](images/5450dca40eeac4045ff6595d4b782ad938216e07e8003b2df24c8c6f182eb107.jpg)

![](images/dbf4f47bb03a3036b9bf87085c37a594b0a7b8e673d03f059d3b4441c2b7d680.jpg)
Our proposed termination method is simple, yet it offers theoretical guarantees. To demonstrate its effectiveness, we compare the performance of our proposed method against four peer methods on a variety of benchmark problems, encompassing synthetic functions, reinforcement learning, and hyperparameter optimization.

# 2 Proposed Method

This section starts with a gentle tutorial of vanilla BO. Then, we delineate the implementation of our proposed termination method, followed by a theoretical analysis at the end.

# 2.1 Vanilla Bayesian Optimization

As a gradient-free optimization method, BO comprises two major steps. The first step involves constructing a GP-based surrogate model to approximate the expensive objective function. Given a set of training data $\mathcal{D} = \{\langle \mathbf{x}^i,f(\mathbf{x}^i)\rangle \}_{i = 1}^N$, a GP learns a latent function $g(\mathbf{x})$ such that $\forall \mathbf{x}\in \mathcal{D}$, we have $f(\mathbf{x}) = g(\mathbf{x}) + \epsilon$, where $\epsilon \sim \mathcal{N}(0,\sigma_{\epsilon}^{2})$ is an i.i.d. Gaussian noise. For each testing input vector $\mathbf{z}^{*}\in \Omega$, the mean and variance of the target $f(\mathbf{z}^{*})$ are predicted as follows:

$$
\mu(\mathbf{z}^{*}) = \mathbf{k}^{*\top}(K + \sigma_{\epsilon}^{2}I)^{-1}\mathbf{f},
$$

$$
\sigma^{2}(\mathbf{z}^{*}) = k(\mathbf{z}^{*},\mathbf{z}^{*}) - \mathbf{k}^{*\top}(K + \sigma_{\epsilon}^{2}I)^{-1}\mathbf{k}^{*}, \tag{2}
$$

where $X = (\mathbf{x}^1, \dots, \mathbf{x}^N)^\top$ and $\mathbf{f} = (f(\mathbf{x}^1), \dots, f(\mathbf{x}^N))^\top$. $\mathbf{k}^*$ is the covariance vector between $X$ and $\mathbf{z}^*$, and $K$ is the covariance matrix of $X$. In this paper, we use the Matérn 5/2 kernel as the covariance function to measure the similarity between a pair of data points.
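The posterior predictions of equation (2) can be sketched in plain NumPy. This is a minimal illustration rather than the authors' implementation: the Matérn 5/2 kernel matches the paper's choice, but the unit kernel variance, the lengthscale of 1, the noise level, and the toy $\sin$ objective are assumptions made only for the example.

```python
import numpy as np

def matern52(a, b, lengthscale=1.0):
    """Matern 5/2 covariance between the rows of a (N, d) and b (M, d)."""
    r = np.sqrt(((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)) / lengthscale
    return (1.0 + np.sqrt(5.0) * r + 5.0 * r**2 / 3.0) * np.exp(-np.sqrt(5.0) * r)

def gp_posterior(X, f, Z, noise_var=1e-2):
    """Posterior mean and variance at test points Z, following equation (2)."""
    K_inv = np.linalg.inv(matern52(X, X) + noise_var * np.eye(len(X)))
    k_star = matern52(X, Z)                      # covariance between X and Z
    mu = k_star.T @ K_inv @ f
    var = np.diag(matern52(Z, Z)) - np.einsum("ij,ik,kj->j", k_star, K_inv, k_star)
    return mu, var

# Toy 1-D problem: noisy observations of sin(x) on [0, 6] (assumed example data).
rng = np.random.default_rng(0)
X = rng.uniform(0.0, 6.0, size=(8, 1))
f = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(8)
Z = np.linspace(0.0, 6.0, 50)[:, None]
mu, var = gp_posterior(X, f, Z)
```

In practice one would factor $K + \sigma_{\epsilon}^{2}I$ with a Cholesky decomposition rather than forming the inverse explicitly; the explicit inverse is kept here only to mirror the notation of equation (2).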
The second step consists of an infill criterion based on the optimization of an acquisition function, which determines the next point of merit $\tilde{\mathbf{x}}^*$ to be evaluated by the actual expensive objective function:

$$
\tilde{\mathbf{x}}^{*} = \underset{\mathbf{x}\in\Omega}{\operatorname{argmax}}\ f^{\mathrm{acq}}(\mathbf{x}), \tag{3}
$$

where $f^{\mathrm{acq}}(\mathbf{x}) = \mu(\mathbf{x}) + \omega\sigma(\mathbf{x})$ is the widely used UCB [31], chosen to facilitate our theoretical analysis. Specifically, the parameter $\omega > 0$, determined according to the confidence level (set as 0.95 in this paper), controls the trade-off between exploration and exploitation. Subsequently, the next point of merit $\tilde{\mathbf{x}}^{*}$ is used to update the training dataset as $\mathcal{D} = \mathcal{D}\bigcup \{\tilde{\mathbf{x}}^{*}\}$, and BO iterates between the two aforementioned steps sequentially until a termination criterion is met. The convergence of BO can be evaluated by the regret:

$$
r = f(\mathbf{x}^{\star}) - f(\tilde{\mathbf{x}}^{\star}), \tag{4}
$$

where $\mathbf{x}^{\star}$ represents the ground-truth global optimum and $\tilde{\mathbf{x}}^{\star} = \operatorname*{argmax}_{\mathbf{x}\in \mathcal{D}}f(\mathbf{x})$ denotes the current best-found solution.

# 2.2 Proposed Termination Criterion

Inspired by the observations illustrated in Figure 2, we propose a termination method that involves 'looking back' at the last $\tau > 1$ observed points in the dataset $\mathcal{D}$ and storing these in a temporary archive, denoted as $\tilde{\mathcal{D}}$. The termination criterion we propose is predicated on two primary conditions.

Condition 1.
The BO search process is deemed to have converged within a convex hull $\tilde{\Omega}$ if the following condition is satisfied:

$$
\sum_{j = 1}^{\binom{\tau + 1}{2}} \mathbb{1}\left(\mu\left(\frac{\mathbf{x} + \mathbf{x}^{\prime}}{2}\right) \geq \frac{f(\mathbf{x}) + f(\mathbf{x}^{\prime})}{2}\right) = \binom{\tau + 1}{2}, \tag{5}
$$

where $\mathbb{1}(\cdot)$ denotes the indicator function, returning 1 if the argument holds true and 0 otherwise. $\mathbf{x}$ and $\mathbf{x}'$ are distinct points selected from $\tilde{\mathcal{D}}$. The convex hull, $\tilde{\Omega} = [\tilde{x}_i^{\mathrm{L}},\tilde{x}_i^{\mathrm{U}}]_{i = 1}^n$, is a subset of $\Omega$, where $\tilde{x}_i^{\mathrm{L}} = \min_{\mathbf{x}\in \tilde{\mathcal{D}}}x_i$ and $\tilde{x}_i^{\mathrm{U}} = \max_{\mathbf{x}\in \tilde{\mathcal{D}}}x_i$.

Condition 2. Assuming Condition 1 is satisfied, and $\tilde{\mathbf{x}}$ denotes the most recently observed point in $\mathcal{D}$, we calculate the local regret $\tilde{r}$ as follows:

$$
\tilde{r} = \mu(\dot{\mathbf{x}}) - \mu(\tilde{\mathbf{x}}) + \omega(\sigma(\dot{\mathbf{x}}) + \sigma(\tilde{\mathbf{x}})), \tag{6}
$$

where $\dot{\mathbf{x}} = \operatorname*{argmax}_{\mathbf{x}\in \tilde{\Omega}}\mu(\mathbf{x})$ and $\ddot{\mathbf{x}} = \operatorname*{argmax}_{\mathbf{x}\in \tilde{\Omega}}\sigma^2(\mathbf{x})$. The BO process terminates if the following inequality is satisfied:

$$
\frac{\tilde{r}}{\omega\sigma_{\epsilon}} \leq \eta_{\mathrm{lb}}, \tag{7}
$$

where $\frac{\tilde{r}}{\omega\sigma_{\epsilon}}$ is used as the termination indicator, denoted as $\kappa_{\mathrm{lb}}$, and $\eta_{\mathrm{lb}}$ is a predetermined threshold.

Remark 1.
The inequality within the indicator function $\mathbb{1}(\cdot)$ in equation (5) is derived from Jensen's inequality [4] for the convex function $-f$:

$$
-f(\alpha\mathbf{x} + (1 - \alpha)\mathbf{x}^{\prime}) \leq -\alpha f(\mathbf{x}) - (1 - \alpha)f(\mathbf{x}^{\prime}), \tag{8}
$$

where $\alpha \in [0,1]$ and $\mathbf{x},\mathbf{x}' \in \tilde{\Omega}$. In order to avoid the necessity of additional function evaluations when computing $f\left(\frac{\mathbf{x} + \mathbf{x}'}{2}\right)$, we substitute $\mu\left(\frac{\mathbf{x} + \mathbf{x}'}{2}\right)$ into equation (5).

Remark 2. In equation (6), we employ the widely-used L-BFGS algorithm [6] to compute $\dot{\mathbf{x}}$ and $\ddot{\mathbf{x}}$. To ensure numerical stability, we suggest the following strategies for initializing the algorithm and defining its termination criterion:

1. For $\dot{\mathbf{x}}$, L-BFGS is initialized at a point randomly selected from $\tilde{\Omega}$. The algorithm terminates when $\|\nabla\mu(\mathbf{x})\|_2\leq \lambda$. In our work, we set $\lambda = 10^{-6}$, following Proposition 1.
2. For $\ddot{\mathbf{x}}$, L-BFGS is initialized at the point $\underset{\mathbf{x}\in \tilde{\Omega}}{\operatorname{argmax}}\underline{\sigma}^2(\mathbf{x})$, where $\underline{\sigma}^2(\mathbf{x})$ denotes the lower bound of $\sigma^2(\mathbf{x})$ over $\tilde{\Omega}$. The termination criterion is $\|\nabla\sigma^2(\mathbf{x})\|_2\leq \lambda$, as per Proposition 2.

Remark 3. Considering equation (7), given that $\frac{\mu(\dot{\mathbf{x}}) - \mu(\ddot{\mathbf{x}})}{\omega\sigma_{\epsilon}}\geq 0$ and $\frac{\sigma(\dot{\mathbf{x}}) + \sigma(\ddot{\mathbf{x}})}{\sigma_{\epsilon}}\geq 2$, we deduce that $\eta_{\mathrm{lb}}\geq 2$. The upper bound of $\eta_{\mathrm{lb}}$ is empirically determined, as detailed in Section 4.1.

Remark 4.
When the GP model is overfitting, BO tends to converge within the local region of the current best solution. In this case, both Condition 1 and Condition 2 are easily met, and BO will be terminated prematurely. On the other hand, when the model is underfitting, BO will explore $\Omega$ in a random manner. In this case, satisfying Condition 1 becomes challenging, and BO risks never being terminated. Therefore, we design three mitigation strategies: 1) restrict the lengthscale to [0.05, 200] during GP training to prevent lengthscales from becoming excessively large or small; 2) normalize the input of the training data to [0, 1]; and 3) standardize the output of the training data by centering it on the mean and scaling it by the variance.

Proposition 1. Suppose $-\mu(\mathbf{x})$ is convex for all $\mathbf{x} \in \tilde{\Omega}$. If $\|\nabla \mu(\mathbf{x})\|_2 \leq \lambda$, we can establish:

$$
\mu(\dot{\mathbf{x}}) - \mu(\mathbf{x}) \leq \xi, \tag{9}
$$

where $\lambda = (2m_1\xi)^{1/2}$, $\xi$ is a positive constant, and $m_{1}$ denotes the strong convexity parameter of $-\mu(\mathbf{x})$ [4].

Lemma 1. Assume the GP employs a stationary kernel $k(\cdot, \cdot)$. For all $\mathbf{x} \in \tilde{\Omega}$, the lower bound of $\sigma^2(\mathbf{x})$ is given by:

$$
\underline{\sigma}^{2}(\mathbf{x}) = k(\mathbf{x}, \mathbf{x}) + c\sum_{i = 1}^{|\mathcal{D}|} k^{2}(\mathbf{x}, \mathbf{x}^{i}), \tag{10}
$$

where $c < 0$ is a constant and $\mathbf{x}^i\in \mathcal{D}$ for $i\in \{1,\dots,|\mathcal{D}|\}$.

Lemma 2.
Given Lemma 1, determining $\underset{\mathbf{x} \in \tilde{\Omega}}{\operatorname{argmax}} \underline{\sigma}^2(\mathbf{x})$ is equivalent to solving the following bilevel optimization problem:

$$
\begin{array}{rl}
\underset{\mathbf{x}\in\tilde{\Omega}}{\text{minimize}} & d(\mathbf{x},\mathbf{x}^1,\mathbf{x}^2) = \|\mathbf{x} - \mathbf{x}^1\|_2^2 + \|\mathbf{x} - \mathbf{x}^2\|_2^2 \\
\text{subject to} & \{\mathbf{x}^1,\mathbf{x}^2\} = \underset{\substack{\mathbf{x}^1,\mathbf{x}^2\in\mathcal{D}\cap\tilde{\Omega},\ \mathbf{x}^1\neq\mathbf{x}^2 \\ \hat{\Omega}\cap\mathcal{D} = \emptyset}}{\operatorname{argmax}} \|\mathbf{x}^1 - \mathbf{x}^2\|_2^2,
\end{array} \tag{11}
$$

where $\hat{\Omega} = [\hat{x}_i^{\mathrm{L}},\hat{x}_i^{\mathrm{U}}]_{i = 1}^n\subset \tilde{\Omega}$, $\hat{x}_i^{\mathrm{L}} = \min(x_i^1,x_i^2)$ and $\hat{x}_i^{\mathrm{U}} = \max(x_i^1,x_i^2)$. Given that the lower-level optimization can be addressed via exhaustive search, the analytical solution of (11) is given by $\hat{\mathbf{x}} = (\hat{x}_1^{\mathrm{L}} + \frac{\hat{x}_1^{\mathrm{U}} - \hat{x}_1^{\mathrm{L}}}{2},\dots,\hat{x}_n^{\mathrm{L}} + \frac{\hat{x}_n^{\mathrm{U}} - \hat{x}_n^{\mathrm{L}}}{2})^\top$.

Proposition 2.
Leveraging Lemma 2, and supposing $-\sigma^2(\mathbf{x})$ exhibits convexity in its locally optimal regions over $\tilde{\Omega}$, the following inequality is satisfied when $\|\nabla\sigma^2(\mathbf{x})\|_2 \leq \lambda$:

$$
\sigma^{2}(\ddot{\mathbf{x}}) - \sigma^{2}(\mathbf{x}) \leq \beta + \xi, \tag{12}
$$

where $\lambda = (2m_2\xi)^{1/2}$, $\xi > 0$, $m_{2} > 0$ represents the strong convexity parameter of $-\sigma^2(\mathbf{x})$ in its locally optimal regions [4], and $\beta$ is constrained by $0\leq \beta \leq \sigma^{2}(\ddot{\mathbf{x}}) - \sigma^{2}(\hat{\mathbf{x}})$.

# 2.3 Theoretical Analysis of the Proposed Termination Criterion

In this subsection, we delve into the theoretical underpinnings of the proposed termination method, focusing on the convergence of BO when the UCB is utilized as the acquisition function.

Lemma 3. As per Srinivas et al. [31], the optimization process in BO can be conceptualized as a sampling process from a GP. Hence, for any $\mathbf{x} \in \Omega$, we have:

$$
\Pr\left(|f(\mathbf{x}) - \mu(\mathbf{x})| \leq \omega\sigma(\mathbf{x})\right) > \delta, \tag{13}
$$

where $\delta > 0$ signifies the confidence level adhered to by the UCB.

Corollary 1. Based on Lemma 3 and Condition 2, we deduce that:

$$
\Pr\left(f^{\mathrm{acq}}(\tilde{\mathbf{x}}^{\star}) + \varepsilon \geq f(\mathbf{x}^{\star})\right) > \delta, \tag{14}
$$

where $\varepsilon$ is the numerical error incurred when optimizing the acquisition function, $\tilde{\mathbf{x}}^{\star} = \underset{\mathbf{x}\in \Omega}{\operatorname{argmax}}\ f^{\mathrm{acq}}(\mathbf{x})$, and $\mathbf{x}^{\star}$ represents the true global optimum.
Furthermore, + +$$ +0 \leq \varepsilon \leq \mu (\dot {\mathbf {x}}) + \omega \sigma (\ddot {\mathbf {x}}) - f ^ {\mathrm {a c q}} (\tilde {\mathbf {x}} ^ {\star}), \tag {15} +$$ + +where $\dot{\mathbf{x}},\ddot{\mathbf{x}}$ and $\tilde{\mathbf{x}}^{\star}$ are elements of $\tilde{\Omega}$ , while $\delta >0$ denotes the confidence level of the UCB. + +Theorem 1. Leveraging Corollary 1, when employing the termination method proposed in this paper, we deduce that the global regret bound of BO as: + +$$ +\Pr (r \leq 2 \omega \sigma (\tilde {\mathbf {x}} ^ {\star}) + \varepsilon) > \delta , \tag {16} +$$ + +where $\delta > 0$ signifies the confidence level associated with the UCB. + +Theorem 2. Building upon Condition 1 and Condition 2, and employing the termination method proposed in this paper, we establish the local regret bound of BO as: + +$$ +\Pr (f (\mathbf {x} ^ {\star}) - f (\mathbf {x}) \leq \tilde {r}) > \delta , \tag {17} +$$ + +where $\mathbf{x} \in \tilde{\Omega}$ , $\mathbf{x}^{\star}$ denotes the true global optimum in $\tilde{\Omega}$ , and $\delta > 0$ is the confidence level of the UCB. + +Remark 5. Drawing from Theorem 1 and Theorem 2, we observe that if $\varepsilon$ can be considered negligible when $\tilde{\mathbf{x}}^{\star}$ is accurately determined by optimizing the UCB, $\tilde{r}$ subsequently represents the upper bound of BO regret within the domain $\Omega$ . Conversely, if $\varepsilon$ cannot be disregarded, $\tilde{r}$ is posited as the upper bound of BO regret within the restricted domain $\tilde{\Omega}$ . + +# 3 Experimental Settings + +In this section, we present the experimental setup for our empirical study, which encompasses the benchmark test problems, the peer algorithms, and the performance metrics used for evaluation. + +# 3.1 Benchmark Problems + +We evaluate the performance of our proposed method on three types of benchmark problems. 
- Synthetic functions: We consider the Ackley, Levy, and Schwefel functions [33] with $n \in \{2, 5, 10\}$. The objective function $f(\mathbf{x})$ is contaminated by Gaussian noise $\zeta \sim \mathcal{N}(0.0, 0.2)$. The maximal number of FEs is set to $N_{\mathrm{FE}} = 50n$, with $5n$ allocated to initialization.
- Reinforcement learning (RL): We examine two RL tasks chosen from OpenAI Gym [5]: Lunar Lander with $n = 12$ and Swimmer with $n = 16$. We set $N_{\mathrm{FE}} = 50n$, with $5n$ FEs allocated to initialization.
- Hyperparameter optimization (HPO): We consider 5 HPO tasks taken from HPOBench [9] for tuning a support vector machine (SVM) with $n = 2$, a multi-layer perceptron (MLP) with $n = 5$, a random forest with $n = 4$, and XGBoost with $n = 8$. The computational budget is set the same as in the RL tasks.

Note that, due to the use of termination criteria, it may not be necessary to exhaust the entire allocated computational budget to terminate BO. To ensure statistical significance, each experiment is independently conducted 21 times with different random seeds.

# 3.2 Peer Algorithms

As discussed in Section 1, the termination criterion for BO is an understudied topic in the literature. In our experiments, we compare our proposed method with the following four termination methods.

- Naive method: This method ceases BO when $\tilde{\mathbf{x}}^{\star}$ stays unchanged for $\kappa_{\mathrm{n}}$ consecutive iterations. Here, $\kappa_{\mathrm{n}}$ is also the termination indicator. In our experiments, we test three settings of the threshold $\eta_{\mathrm{n}}$: 150, 337 and 524, respectively.
- Nguyen's method [28]: In each iteration of BO, the optimization of the acquisition function produces the current optimal EI. Using this as the termination indicator, denoted as $\kappa_{\mathrm{EI}}$, Nguyen's method terminates BO when it falls below a predetermined threshold $\eta_{\mathrm{EI}}$.
In our experiments, we consider three settings of $\eta_{\mathrm{EI}}$: 0.01, 0.04 and 0.06.
- Lorenz's method [22]: Analogous to Nguyen's method, Lorenz's method replaces the EI with the PI as the termination indicator, denoted $\kappa_{\mathrm{PI}}$. In our experiments, the termination threshold $\eta_{\mathrm{PI}}$ is set to 0.07, 0.2 and 0.33, respectively.
- Makarova's method [24]: Similar to the previous two methods, Makarova's method uses the difference between the lower and upper confidence bounds as the termination indicator, denoted $\kappa_{\mathrm{diff}}$. It terminates BO when $\kappa_{\mathrm{diff}} \leq \eta_{\mathrm{diff}}$, where the predetermined threshold $\eta_{\mathrm{diff}}$ is set to 0.26, 0.62 and 0.97, respectively, in our experiments.
- Our proposed method: According to Condition 1 and Condition 2, our proposed method terminates BO when $\kappa_{\mathrm{lb}}$ falls below a predetermined threshold $\eta_{\mathrm{lb}}$, which is set to 2.02, 2.05 and 2.08, respectively. Furthermore, we introduce a hyperparameter $\tau$ to control the number of past observations looked back over, which is set to $\tau = 10$ in our experiments. The code is available at https://github.com/COLA-Laboratory/OptimalStoping_NeurIPS2023.

Given the settings above, it is evident that the naive method tends to delay termination when a large $\eta_{\mathrm{n}}$ is used. Conversely, the other methods may incur delayed termination if a small threshold is used. Note that the choices of the corresponding termination thresholds and the sensitivity of $\tau$ are empirically examined in Sections 4.1 and 4.3.

# 3.3 Performance Metrics

In our experiments, we consider the following three performance metrics to measure the effectiveness of a termination method.
- Empirical cumulative probability of a termination indicator:

$$
I _ {\mathrm {cdf}} = \frac {1}{N _ {\mathrm {FE}} \times 21} \sum_ {i = 0} ^ {N _ {\mathrm {FE}} \times 21} \mathbb {1} (\kappa \leq \tilde {\kappa} _ {i}), \tag {18}
$$

where $\tilde{\kappa}_i = \underline{\kappa} +\frac{(\bar{\kappa} - \underline{\kappa})\times i}{N_{\mathrm{FE}}\times 21}$, and $i\in \{0,\dots ,N_{\mathrm{FE}}\times 21\}$. For a given termination method, $\kappa$ represents its termination indicator as outlined in Section 3.2. The minimum and maximum values of $\kappa$, represented by $\underline{\kappa}$ and $\bar{\kappa}$ respectively, are determined across all 21 repeated experiments on each benchmark problem. If $\mathrm{I}_{\mathrm{cdf}}$ exhibits consistency across a range of benchmark problems, it implies that the threshold choice for the corresponding termination method is consistent and does not depend on the specific problem.

![](images/3521c2a70ace1324464c86e5f412f005e573607b42392c5e92836bfaaa617b74.jpg)
Figure 3: Trajectories of $\mathrm{I}_{\mathrm{cdf}}$ collected on different benchmark problems. Here we only show some results without loss of generality, while full results can be found in the supplementary document. The subplots are (a) our proposed method, (b) Naive method, (c) Nguyen's method, (d) Lorenz's method, and (e) Makarova's method, respectively.

![](images/c27221ff8575cd340714eb9b36f57101e980a3985bff754d6fda52f8dab7da58.jpg)
Figure 4: Bar charts with error bars of normalized $\tilde{\kappa}_i$ for different termination methods when $\mathrm{I}_{\mathrm{cdf}}$ is set as 0.05, 0.1, 0.2, 0.3, 0.4, and 0.5 respectively.

- The relative computational cost:

$$
I _ {\text {cost}} = \frac {\tilde {N} _ {\mathrm {FE}}}{N _ {\mathrm {FE}}}, \tag {19}
$$

where $\tilde{N}_{\mathrm{FE}}$ is the number of FEs used by a termination criterion when early stopping occurs.
A lower value of $\mathrm{I_{cost}}$ indicates a higher degree of computational budget saving.

- The relative performance degradation incurred by early stopping:

$$
I _ {\text {perf}} = \frac {f (\bar {\mathbf {x}}) - f (\tilde {\mathbf {x}} ^ {\star})}{f (\bar {\mathbf {x}}) - f (\underline {{\mathbf {x}}})}, \tag {20}
$$

where $\bar{\mathbf{x}}$ and $\underline{\mathbf{x}}$ are the best and the worst solutions found by BO when consuming all $N_{\mathrm{FE}}$ FEs, and $\tilde{\mathbf{x}}^{\star}$ signifies the best solution found when early stopping is prompted by a termination criterion. A smaller $\mathrm{I}_{\mathrm{perf}}$ value indicates less performance degradation resulting from the application of the corresponding termination criterion.

# 4 Empirical Studies

In this section, our experiments aim to investigate three aspects: $i$) the robustness of the termination threshold for different termination methods; $ii$) the trade-off between computational budget saving and performance degradation; and $iii$) the sensitivity of $\tau$ in our proposed termination method.

# 4.1 Robustness of the Selection of Termination Threshold

In this subsection, we use the $\mathrm{I}_{\mathrm{cdf}}$ metric to scrutinize the threshold choice of the various termination methods across different problems. As per equation (18), it is evident that $\mathrm{I}_{\mathrm{cdf}} \propto \tilde{\kappa}_i$. As discussed earlier in Section 3.2, a large $\tilde{\kappa}_i$ can lead to premature early stopping. Consequently, we confine our analysis to instances where $\mathrm{I}_{\mathrm{cdf}} \leq 0.5$. As shown in Figure 3, the trajectories of $\mathrm{I}_{\mathrm{cdf}}$ for our proposed method appear to converge, whereas those for the other methods diverge with different magnitudes.
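All of the termination criteria compared here share a common shape: the naive method counts how long the incumbent stays unchanged, while the other four methods stop once an indicator trajectory falls to or below a threshold. A minimal sketch of both rules (function and variable names are ours, not taken from the paper's released code):

```python
def naive_stop(incumbents, eta_n):
    """Naive rule: stop once the incumbent solution is unchanged for
    eta_n consecutive iterations; returns the stopping iteration or None."""
    unchanged = 0
    for t in range(1, len(incumbents)):
        unchanged = unchanged + 1 if incumbents[t] == incumbents[t - 1] else 0
        if unchanged >= eta_n:
            return t
    return None

def threshold_stop(indicator_trajectory, eta):
    """Shared form of Nguyen's (EI), Lorenz's (PI), Makarova's (UCB-LCB gap)
    and the proposed (kappa_lb) rules: stop the first time the termination
    indicator falls to or below its threshold eta."""
    for t, kappa in enumerate(indicator_trajectory):
        if kappa <= eta:
            return t
    return None
```

For example, `threshold_stop([0.5, 0.3, 0.05, 0.2], eta=0.1)` returns 2, the first iteration at which the indicator drops below the threshold.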
More specifically, as shown in Figure 3(a), $\tilde{\kappa}_i = 2$ can be regarded as a transition point where $\mathrm{I}_{\mathrm{cdf}} \geq 0.95$ if $\tilde{\kappa}_i \geq 2$. This empirical observation corroborates the theoretical result derived in Condition 2. In contrast, there does not exist a consistent lower bound for the other termination methods. To further elucidate these observations, we plot the distributions of $\tilde{\kappa}_i$ when $\mathrm{I}_{\mathrm{cdf}}$ ranges from 0.05 to 0.5 in Figure 4. It is clear that the bar charts exhibit the least variation for our proposed method. For the naive method, $\tilde{\kappa}_i$ increases as $\mathrm{I}_{\mathrm{cdf}}$ grows. However, the bars for the other three methods show significant fluctuations, particularly for Nguyen's and Lorenz's methods. These observations are further substantiated by the trajectories of the termination indicators throughout the BO process, as shown in Figure 5. We present results for the Ackley and HPO for SVM problems here, while complete results are available in the supplementary document. These plots reveal that the trajectories for our proposed method converge to a certain threshold, while those for the other methods not only diverge but also differ significantly across problems. Based on this discussion, we use $\mathrm{I}_{\mathrm{cdf}} = 0.05$ as the capping point to guide the selection of the termination threshold for each termination method: $\eta_{\mathrm{lb}} \in [2, 2.1]$, $\eta_{\mathrm{n}} \in [57, 617]$, $\eta_{\mathrm{EI}} \in [3.8 \times 10^{-24}, 0.08]$, $\eta_{\mathrm{PI}} \in [2 \times 10^{-21}, 0.39]$, and $\eta_{\mathrm{diff}} \in [0.09, 1.15]$. In our experiments, we apply the Latin hypercube design method [26] to choose the three settings listed in Section 3.2.

![](images/e42c8729e2e08689058f3aa9380a6be7f780f263178b22e8628061e30cfb0b99.jpg)
(a) Proposed method

![](images/552c467d2ba9f6b01d1dc9af23ec945fa9e56e4eac421b5d02ad13a4cd832454.jpg)
(b) Naive method

![](images/0b25f34a8d0036eb0e42f3e8174283637e6bcd40bf3afd63317c8194728db16c.jpg)
(c) Nguyen's method

![](images/4202472798bcd6030bbb8999fb4216a3393b31560129a51e474970f9e7445924.jpg)
(d) Lorenz's method

![](images/b2a95682428bb6955e894591637823f5f3cec72ce6c3886cdef8a3e5f07403cc.jpg)
(e) Makarova's method

![](images/728c059787c7150325a0ffca31cccdda18e93726f178e1c8ac3e64b3df877c06.jpg)
Figure 5: Trajectories of different termination indicators versus the number of FEs during the BO process on Ackley $(n = 2)$ and HPO for SVM.

![](images/6339c62c9c37cc70294b6f393c1671249c31c7d26b4730576da86ee1d390afad.jpg)
Figure 6: Bar charts with error bars of $\mathrm{I}_{\mathrm{cost}}$ and $\mathrm{I}_{\mathrm{perf}}$ obtained by using different settings of the termination threshold suggested in Section 3.2, denoted as $\eta_1$, $\eta_2$ and $\eta_3$ respectively. Subplots (a) to (e) correspond to our proposed, Naïve, Nguyen's, Lorenz's, and Makarova's methods respectively.

![](images/c7a726c35c3641f926cd8b9794256e850fc8f7efbeef9a66b911a4ee4670eb12.jpg)
![](images/8e2f27c851fcf41bb7c06244154e35340defec37dfc6d51ec7246f9709391ca5.jpg)
![](images/190fa18be5e7a7280ba279bcd6b4fdc66e22a87697695132b7adde748bc79cce.jpg)
![](images/7e1cfd392ab0b7c5714cb15864f18e47dc9be4d44f1e2bf9b7f7efbd7bce850d.jpg)
![](images/2009ef236cc0e662dcc71add7d3f6faa3840d12f4d1e5a3969e3b15c89e829e3.jpg)
Figure 7: Trajectories of the regret of BO versus the number of FEs during the BO process on five selected problems. Full results can be found in the supplementary document.
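The three metrics of Section 3.3 are straightforward to evaluate from logged runs. A sketch with our own helper names; `i_cdf` follows the grid construction of Eq. (18), and `i_perf` follows the minimization convention of Eq. (20):

```python
import numpy as np

def i_cdf(kappa, indicator_values, n_fe, n_runs=21):
    """Empirical cumulative probability of a termination indicator, Eq. (18):
    fraction of grid points kappa_tilde_i, evenly spaced between the min and
    max indicator values over all runs, that kappa falls at or below."""
    lo, hi = float(np.min(indicator_values)), float(np.max(indicator_values))
    i = np.arange(n_fe * n_runs + 1)
    grid = lo + (hi - lo) * i / (n_fe * n_runs)
    return np.sum(kappa <= grid) / (n_fe * n_runs)

def i_cost(n_fe_used, n_fe_budget):
    """Relative computational cost, Eq. (19)."""
    return n_fe_used / n_fe_budget

def i_perf(f_best_full, f_worst_full, f_at_stop):
    """Relative performance degradation, Eq. (20): 0 means early stopping
    reached the same objective value as the full-budget run."""
    return (f_best_full - f_at_stop) / (f_best_full - f_worst_full)
```

For instance, stopping after 40 of 100 FEs gives `i_cost(40, 100) == 0.4`, and a stopped objective value of 1.0 against full-budget best 0.0 and worst 10.0 gives an `i_perf` of 0.1.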
# 4.2 Computational Budget Saving versus Performance Degradation

There is a trade-off in terminating BO early: the performance of BO can be compromised when using fewer FEs. In this subsection, we employ $\mathrm{I}_{\mathrm{cost}}$ and $\mathrm{I}_{\mathrm{perf}}$ to characterize this trade-off. From the comparison results shown in Figure 6 and Table 1, we can see that although the naïve method achieves the best $\mathrm{I}_{\mathrm{perf}}$, it consumes almost all FEs. In contrast, our proposed method saves up to $\approx 80\%$ of the computational budget while its performance degradation is up to an order of magnitude smaller than that of the other three termination methods. From the trajectories of the regret of BO versus the number of FEs shown in Figure 7, we can see that the other three termination methods suffer from premature early stopping.

Table 1: The statistical comparison results of different termination methods on $\mathrm{I}_{\mathrm{cost}}$ and $\mathrm{I}_{\mathrm{perf}}$.
| Metrics | Thresholds | Naïve method | Nguyen's method | Lorenz's method | Makarova's method | Proposed method |
| --- | --- | --- | --- | --- | --- | --- |
| $\mathrm{I}_{\mathrm{cost}}$ | $\eta_1$ | 1(0)$^\dagger$ | 0.1313(4.48E-1)$^\ddagger$ | 0.1244(4.48E-1)$^\ddagger$ | 0.7856(4.17E-1)$^\dagger$ | 0.6206(3.17E-1) |
| | $\eta_2$ | 1(0)$^\dagger$ | 0.1082(3.5E-2)$^\ddagger$ | 0.1053(1.47E-2)$^\ddagger$ | 0.1414(3.89E-2)$^\ddagger$ | 0.3012(1.94E-1) |
| | $\eta_3$ | 0.8343(2.51E-1)$^\dagger$ | 0.1048(9.01E-3)$^\ddagger$ | 0.1044(4.20E-3)$^\ddagger$ | 0.1313(3.33E-2)$^\ddagger$ | 0.2209(1.12E-1) |
| $\mathrm{I}_{\mathrm{perf}}$ | $\eta_1$ | 0(0)$^\ddagger$ | 0.0077(7.28E-2)$^\dagger$ | 0.0067(7.71E-2)$^\dagger$ | 0(1.56E-2)$^\dagger$ | 0(6.81E-3) |
| | $\eta_2$ | 0(0)$^\ddagger$ | 0.0614(1.08E-1)$^\dagger$ | 0.0721(1.13E-1)$^\dagger$ | 0.0167(6.51E-2)$^\dagger$ | 0(3.35E-2) |
| | $\eta_3$ | 0(0)$^\ddagger$ | 0.0704(1.14E-1)$^\dagger$ | 0.0978(1.18E-1)$^\dagger$ | 0.0355(8.08E-2)$^\dagger$ | 0.0028(4.17E-2) |
$^\dagger$ denotes that the performance of our proposed method is significantly better than that of the other peers according to the Wilcoxon rank sum test at a 0.05 significance level; $^\ddagger$ denotes the opposite case.

![](images/5fd460a5dfe569b15939c56a5391231ab9168bcbcd743c67c159a520e7e77dd8.jpg)
![](images/70968b03f1bd3dd2339c1f42d8f8de6eea74a3fc1cbed27b5c457f87d9b57d49.jpg)
Figure 8: Bar charts with error bars of $\mathrm{I}_{\mathrm{cost}}$ and $\mathrm{I}_{\mathrm{perf}}$ when using $\tau \in \{2i\}_{i=1}^{9}$ in our proposed termination method.

# 4.3 Parameter Sensitivity Study

In this subsection, we investigate the sensitivity of our proposed termination method with respect to the parameter $\tau$. We consider the settings $\tau \in \{2i\}_{i=1}^{9}$ and repeat the experiments on all benchmark problems introduced in Section 3.1. The aggregated comparison results for $\mathrm{I_{cost}}$ and $\mathrm{I_{perf}}$ are illustrated as bar charts with error bars in Figure 8. Specifically, we present the results for $\eta_{\mathrm{lb}} = 2.05$, while the complete results can be found in the supplementary document. The plots show that the choice of $\tau$ has minimal impact on the results, except for the cases $\tau = 2$ and $\tau = 4$. This is reasonable, as the termination method may not utilize sufficient historical information when only a few observed samples are considered. Additionally, we examine the scenario where the equality constraint in Condition 1, i.e., equation (5), is relaxed. The comparison results in Figure 8 reveal similar observations regarding the settings of $\tau$. However, we also notice a slight performance degradation and more aggressive early stopping in this case. These findings demonstrate that Condition 1 helps mitigate the risk of premature early stopping.
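These empirical observations can be related back to the theory of Section 2: the quantities appearing in Theorem 1 are cheap to evaluate from the GP posterior. A minimal sketch that takes $\varepsilon$ at its upper bound from Eq. (15); the argument names are ours, not the paper's:

```python
def regret_upper_bound(mu_dot, sigma_ddot, f_acq_star, sigma_star, omega):
    """High-probability regret bound of Theorem 1:
    r <= 2 * omega * sigma(x_star) + eps, with eps taken at its upper
    bound mu(x') + omega * sigma(x'') - f_acq(x_star) from Eq. (15)."""
    eps = max(0.0, mu_dot + omega * sigma_ddot - f_acq_star)
    return 2.0 * omega * sigma_star + eps

# A well-identified optimum (small posterior sigma at x_star) combined with
# a tight acquisition gap yields a small bound on the achievable regret.
bound = regret_upper_bound(mu_dot=1.0, sigma_ddot=0.3, f_acq_star=1.2,
                           sigma_star=0.05, omega=2.0)
```

Here `bound` evaluates to 0.6: an $\varepsilon$ of 0.4 from Eq. (15) plus the $2\omega\sigma(\tilde{\mathbf{x}}^{\star}) = 0.2$ term of Eq. (16).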
# 5 Other Partially Related Works

Despite the limited number of dedicated studies on termination criteria for BO, various efforts have been made to explore early stopping strategies in different contexts.

The first category primarily focuses on detecting change points in sequential processes [34], with applications spanning fields such as financial analysis [20], bioinformatics [7], and network traffic data analysis [23], among others. However, modeling the automatic termination of BO as a change point detection (CPD) problem presents several challenges: 1) the absence of suitable stopping metrics that can provide signals for CPD in the optimization process of BO; 2) the unknown and uncertain nature of the signal distribution, the number of change points, and change point consistency; 3) the limited data available for CPD; and 4) the necessity to further evaluate change points in order to determine an appropriate moment for terminating BO.

The second category primarily focuses on determining the statistically optimal stopping moment for generalized sequential decision-making processes [14, 13]. For instance, in the classical secretary problem, termination criteria are developed to identify the maximum of an unknown distribution with minimal cost through sequential search [12]. These works typically establish relationships between the costs and rewards of decision-making using cost coefficients [8, 2, 3], unknown observation costs [15, 37, 30, 25] or discount factors [36], subsequently deriving statistically optimal stopping conditions. However, quantifying the relationship between the improvement of the fitness and the cost of BO remains challenging. Furthermore, these criteria do not leverage the information provided by the surrogate model, which is crucial in BO.

The third category primarily aims to balance exploration and exploitation in the optimization process.
Among them, heuristic methods, exemplified by simulated annealing, are widely employed to halt the local search step of optimization algorithms [17, 1, 21]. However, the hyperparameters of such methods lack interpretability and must be fine-tuned according to different problem characteristics. Additionally, McLeod et al. [27] propose a regret-based strategy for switching between local and global optimization. Although promising for complex functions, this approach has certain limitations, including reliance on the authors' proposed regret reduction acquisition function and the potential need for additional computational resources to approximate intractable integrals. Furthermore, Eriksson et al. [10] developed a trust-region-based BO that balances exploitation and exploration. This algorithm terminates the local search when the trust region size is reduced to zero. However, its termination criteria lack theoretical guarantees and are bound to the proposed trust region maintenance mechanism.

# 6 Conclusion

In this paper, we developed a simple yet theoretically grounded two-step method for automatically terminating BO. The key insight is to proactively detect the local convex region and terminate BO whenever the termination indicator built upon the local regret therein falls below a predetermined threshold. Our proposed termination method naturally strikes a balance between the quality of the solution found by BO and its computational efficiency. The proposed method is supported by robust theoretical underpinnings, and we have additionally introduced an approximation method that enhances numerical stability by solving a bilevel optimization problem. Our extensive empirical studies, conducted across a variety of benchmark problems, including synthetic functions, reinforcement learning, and hyperparameter optimization, consistently demonstrated the better performance of our proposed method compared to other state-of-the-art techniques.
Besides, the experimental results also show that the termination criterion of our proposed method is robust across different problems. This property opens an additional opportunity for our proposed termination method to go beyond automatically terminating BO to a broader range of applications, such as early stopping to avoid overfitting in neural network training, change point or anomaly detection in data streams, and even a new perspective on striking the balance between exploitation and exploration in a bandit setting. The primary limitation of the proposed termination criterion is that it requires a predefined termination threshold, which needs to be determined based on prior knowledge or empirical observations. Although a recommended threshold selection range is given here, finding an optimal threshold that suits a wide range of optimization problems remains a challenge.

# Author Contributions

SL implemented the theoretical derivations and experiments, as well as drafted the manuscript; KL piloted the idea and re-wrote the manuscript; WL proofread the manuscript.

# Acknowledgement

This work was supported in part by the UKRI Future Leaders Fellowship under Grant MR/S017062/1 and MR/X011135/1; NSFC under Grant 62376056 and 62076056; the Royal Society under Grant IES/R2/212077; the Kan Tong Po Fellowship (KTP/R1/231017); the EPSRC under Grant 2404317; the Amazon Research Award and Alan Turing Fellowship; and the National Natural Science Foundation of China under Grant 62273119.

# References

[1] Ricardo Baptista and Matthias Poloczek. Bayesian optimization of combinatorial structures. In ICML'18: Proc. of the International Conference on Machine Learning, pages 462-471. PMLR, 2018.

[2] Bruno Betro and Fabio Schoen. Sequential stopping rules for the multistart algorithm in global optimisation. Mathematical Programming, 38(3):271-286, 1987.

[3] Bruno Betro and Fabio Schoen.
Optimal and sub-optimal stopping rules for the multistart algorithm in global optimization. Mathematical Programming, 57(1):445-458, 1992.

[4] Stephen Boyd and Lieven Vandenberghe. Convex Optimization. Cambridge University Press, 2004.

[5] Greg Brockman, Vicki Cheung, Ludwig Pettersson, Jonas Schneider, John Schulman, Jie Tang, and Wojciech Zaremba. OpenAI Gym. Retrieved January 20, 2023, from https://github.com/openai/gym.

[6] Richard H Byrd, Peihuang Lu, Jorge Nocedal, and Ciyou Zhu. A limited memory algorithm for bound constrained optimization. SIAM Journal on Scientific Computing, 16(5):1190-1208, 1995.

[7] Souhil Chakar, E Lebarbier, Céline Lévy-Leduc, and Stéphane Robin. A robust approach for estimating change-points in the mean of an AR(1) process. Bernoulli, 23(2):1408-1447, 2017.

[8] Herman Chernoff. Sequential design of experiments. The Annals of Mathematical Statistics, 30(3):755-770, 1959.

[9] Katharina Eggensperger, Philipp Müller, Neeratyoy Mallik, Matthias Feurer, Rene Sass, Aaron Klein, Noor Awad, Marius Lindauer, and Frank Hutter. HPOBench: A collection of reproducible multi-fidelity benchmark problems for HPO. In NeurIPS'21: Proc. of the Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2), 2021.

[10] David Eriksson, Michael Pearce, Jacob Gardner, Ryan D Turner, and Matthias Poloczek. Scalable global optimization via local Bayesian optimization. Advances in Neural Information Processing Systems, 32, 2019.

[11] Peter I. Frazier. A tutorial on Bayesian optimization. CoRR, abs/1807.02811, 2018.

[12] PR Freeman. The secretary problem and its extensions: A review. International Statistical Review/Revue Internationale de Statistique, pages 189-206, 1983.

[13] Roman Garnett. Bayesian Optimization. Cambridge University Press, 2023.

[14] Daniel G Goldstein, R Preston McAfee, Siddharth Suri, and James R Wright. Learning when to stop searching. Management Science, 66(3):1375-1394, 2020.
[15] Frank Hutter, Holger H Hoos, and Kevin Leyton-Brown. Sequential model-based optimization for general algorithm configuration. In LION'11: Proc. of the Fifth International Conference on Learning and Intelligent Optimization, pages 507-523. Springer, 2011.

[16] Donald R. Jones, Matthias Schonlau, and William J. Welch. Efficient global optimization of expensive black-box functions. J. Glob. Optim., 13(4):455-492, 1998.

[17] Scott Kirkpatrick, C Daniel Gelatt Jr, and Mario P Vecchi. Optimization by simulated annealing. Science, 220(4598):671-680, 1983.

[18] H. J. Kushner. A new method of locating the maximum point of an arbitrary multipeak curve in the presence of noise. J. Basic Eng., 86(1):97-106, 1964.

[19] P. Langley. Crafting papers on machine learning. In Pat Langley, editor, ICML'00: Proc. of the 17th International Conference on Machine Learning, pages 1207-1216, Stanford, CA, 2000. Morgan Kaufmann.

[20] Marc Lavielle and Gilles Teyssiere. Adaptive detection of multiple change-points in asset price volatility. In Long Memory in Economics, pages 129-156. Springer, 2007.

[21] Daniel James Lizotte. Practical Bayesian optimization, 2008.

[22] Romy Lorenz, Ricardo P Monti, Ines R Violante, Aldo A Faisal, Christoforos Anagnostopoulos, Robert Leech, and Giovanni Montana. Stopping criteria for boosting automatic experimental design using real-time fMRI with Bayesian optimization, 2016.

[23] Alexandre Lung-Yut-Fong, Céline Lévy-Leduc, and Olivier Cappé. Distributed detection/localization of change-points in high-dimensional network traffic data. Statistics and Computing, 22(2):485-496, 2012.

[24] Anastasia Makarova, Huibin Shen, Valerio Perrone, Aaron Klein, Jean Baptiste Faddoul, Andreas Krause, Matthias W. Seeger, and Cedric Archambeau. Automatic termination for hyperparameter optimization. In AutoML'22: Proc. of 2022 International Conference on Automated Machine Learning, volume 188 of Proceedings of Machine Learning Research, pages 7/1-21.
PMLR, 2022.

[25] Gustavo Malkomes, Charles Schaff, and Roman Garnett. Bayesian optimization for automated model selection. pages 2892-2900, 2016.

[26] Michael D. McKay, Richard J. Beckman, and William J. Conover. A comparison of three methods for selecting values of input variables in the analysis of output from a computer code. Technometrics, 42(1):55-61, 2000.

[27] Mark McLeod, Stephen Roberts, and Michael A Osborne. Optimization, fast and slow: optimally switching between local and Bayesian optimization. In ICML'18: Proc. of the International Conference on Machine Learning, pages 3443-3452. PMLR, 2018.

[28] Vu Nguyen, Sunil Gupta, Santu Rana, Cheng Li, and Svetha Venkatesh. Regret for expected improvement over the best-observed value and stopping condition. In ACML'17: Proc. of The 9th Asian Conference on Machine Learning, volume 77 of Proceedings of Machine Learning Research, pages 279-294. PMLR, 2017.

[29] Bobak Shahriari, Kevin Swersky, Ziyu Wang, Ryan P. Adams, and Nando de Freitas. Taking the human out of the loop: A review of Bayesian optimization. Proc. IEEE, 104(1):148-175, 2016.

[30] Jasper Snoek, Hugo Larochelle, and Ryan P Adams. Practical Bayesian optimization of machine learning algorithms. In NeurIPS'12: Proc. of the Twenty-sixth Conference on Neural Information Processing Systems, pages 2951-2959, 2012.

[31] Niranjan Srinivas, Andreas Krause, Sham M. Kakade, and Matthias W. Seeger. Gaussian process optimization in the bandit setting: No regret and experimental design. In ICML'10: Proc. of the 27th International Conference on Machine Learning, pages 1015-1022. Omnipress, 2010.

[32] Niranjan Srinivas, Andreas Krause, Sham M. Kakade, and Matthias W. Seeger. Information-theoretic regret bounds for Gaussian process optimization in the bandit setting. IEEE Trans. Inf. Theory, 58(5):3250-3265, 2012.

[33] S. Surjanovic and D. Bingham. Virtual library of simulation experiments: Test functions and datasets.
Retrieved January 20, 2023, from http://www.sfu.ca/~ssurjano.

[34] Charles Truong, Laurent Oudre, and Nicolas Vayatis. Selective review of offline change point detection methods. Signal Processing, 167:107299, 2020.

[35] Christopher Williams and Carl Edward Rasmussen. Gaussian Processes for Machine Learning. MIT Press, Cambridge, MA, 2006.

[36] Tianyi Zhang, Daniel Russo, and Assaf Zeevi. Learning to stop with surprisingly few samples. In COLT'21: Proc. of the Conference on Learning Theory, pages 3887-3888. PMLR, 2021.

[37] Shlomo Zilberstein. Using anytime algorithms in intelligent systems. AI Magazine, 17(3):73-83, 1996.
# xTrimoGene: An Efficient and Scalable Representation Learner for Single-Cell RNA-Seq Data

Jing Gong $^{1*}$, Minsheng Hao $^{12*}$, Xingyi Cheng $^{1\dagger}$, Xin Zeng $^{1}$, Chiming Liu $^{1}$, Jianzhu Ma $^{2}$, Xuegong Zhang $^{2}$, Taifeng Wang $^{1}$, Le Song $^{13\dagger}$

$^{1}$ BioMap Research, $^{2}$ Tsinghua University, $^{3}$ Mohamed bin Zayed University of Artificial Intelligence

{gongjing, minsheng_2022, xingyi, zengxin, chiming, taifeng, song1e}@biomap.com, {zhangxg, majianzhu}@tsinghua.edu.cn

# Abstract

Advances in high-throughput sequencing technology have led to significant progress in measuring gene expressions at the single-cell level. The amount of publicly available single-cell RNA-seq (scRNA-seq) data already surpasses 50M records for humans, with each record measuring 20,000 genes. This highlights the need for unsupervised representation learning to fully ingest these data, yet classical transformer architectures are prohibitive to train on such data in terms of both computation and memory. To address this challenge, we propose a novel asymmetric encoder-decoder transformer for scRNA-seq data, called xTrimoGene $^{\alpha}$ (or xTrimoGene for short) $^{4}$, which leverages the sparse characteristic of the data to scale up the pre-training. This scalable design of xTrimoGene reduces FLOPs by one to two orders of magnitude compared to classical transformers while maintaining high accuracy, enabling us to train the largest transformer models over the largest scRNA-seq dataset today.
Our experiments also show that the performance of xTrimoGene improves as we scale up the model sizes, and it also leads to SOTA performance over various downstream tasks, such as cell type annotation, perturb-seq effect prediction, and drug combination prediction. The xTrimoGene model is now available for use as a service via the following link: https://api.biomap.com/xTrimoGene/apply.

# 1 Introduction

Recently, Artificial Intelligence (AI) technology has demonstrated promising results for addressing scientific problems. This AI4Science paradigm has witnessed diverse successful biological and pharmaceutical applications, including protein analysis [17, 20, 38, 35, 1], RNA modeling [4], and genomics modulation [27]. However, most existing AI models have predominantly focused on protein sequences, neglecting the growing volume of high-throughput experimental sequencing data in the form of gene expression values. Single-cell RNA sequencing (scRNA-seq) technology has transformed the field of cell biology and enabled us to understand cell-cell, cell-gene and gene-gene relations at the cellular level [16, 3]. This technique captures the expression levels of thousands of genes in parallel, facilitating the study of cellular heterogeneity [5, 19]. This unveiled information is crucial for understanding complex biological systems and disease progression [16, 3]. Integrating and modeling such large-scale scRNA-seq data can reveal rich cellular information and benefit various biological task learning.

Representation learning from scRNA-seq data [9] has been an active area of research in past decades. For example, scVAE [11] and scVI [21] apply a variational autoencoder framework to derive low-dimensional cell embeddings, cscGAN [24] uses a Generative Adversarial Network (GAN) architecture to generate cell-type specific expression profiles, and Saver-X [32] is capable of removing batch effects across datasets.
Despite the success of these customized algorithms, they tend to be computationally inefficient and labor-intensive. This prompts us to explore a general-purpose model that first learns underlying knowledge from scRNA-seq data and then generalizes to different tasks in a unified manner. We draw inspiration from the pre-training and fine-tuning paradigm in Natural Language Processing (NLP), which has shown great success in improving performance on various downstream NLP tasks [29, 12, 14]. In light of these findings, we aim to investigate the potential of applying similar approaches to representation learning on scRNA-seq data.

The first published pre-trained model for single-cell data is scBERT, which uses a low-rank transformer [36] to analyze scRNA-seq data. It learns cellular representations by randomly masking a percentage of non-zero gene expression values and attempting to recover them. scBERT has achieved state-of-the-art results on cell-type annotation tasks, demonstrating the potential of a pre-training strategy for single-cell biology research. However, scBERT has certain limitations in fully exploiting the properties of scRNA-seq data:

(1) Scalability. The large number of genes (almost 20,000) and the sparsity of scRNA-seq data, with nearly $90\%$ of values being zero, lead to many redundant computations (e.g., self-attention between zero tokens). It required approximately $2.65 \times 10^{19}$ FLOPs to train on 5 million samples over 5 epochs, which equals almost 20 days of training on an A100 GPU for only an 8.9-million-parameter scBERT model. (2) Limited resolution for expression values. scBERT rounds gene expression values to integers, which limits the model's ability to distinguish closeness and similarity between gene expression values.
For instance, two close values can be mapped to separate embeddings (e.g., 1.99 and 2.01 are mapped to 1 and 2), while two distant values can be mapped to identical embeddings (e.g., 1.99 and 1.01 are both mapped to 1). This strategy leads to a loss of resolution and introduces bias during model training, resulting in sub-optimal performance.

To address the challenges associated with scRNA-seq data modeling, and considering the unique nature of this data (as discussed in Section 2), we present a novel and efficient framework, xTrimoGene, for pre-training on large-scale scRNA-seq data. Our framework makes the following key contributions:

(1) We design an asymmetric encoder-decoder architecture to guide the pre-training process, which enables us to learn a high-capacity model for single-cell RNA-seq data. Our model achieves a more than 3-fold pre-training speed-up over previous encoder-only models.
(2) We show that the efficiency and scalability of our model allow us to train the largest single-cell pre-trained model to date, with approximately 100 million parameters for the xTrimoGene-100M model, using a curated scRNA-seq dataset of approximately 50 billion effective gene tokens.
(3) The pre-trained xTrimoGene model achieves remarkable results on multiple downstream tasks, including cell type annotation, perturbation prediction, and synergistic drug combination prediction.

# 2 Characteristics of Single-Cell RNA-seq Data

scRNA-seq generates a large, sparse expression matrix, where each row represents a cell (sample) and each column a gene (feature). This data presents several challenges and requires a specialized architecture to model effectively.

First, approximately 20,000 genes (columns) are shared across cells. Unlike the corpus in NLP, the genes can be arbitrarily reordered.
Relations between genes are determined by biological pathways rather than by local context, in contrast to the spatial locality that structures Computer Vision (CV) images. Though one can roughly regard each cell (row) as a sentence or an image patch, the 20,000 genes form a far longer sequence than the typical NLP sequence length, which is mostly a few hundred and rarely more than a few thousand [29, 12]. Thus, directly applying existing transformer architectures is not feasible.

Second, scRNA-seq matrices are highly sparse ($90\%$ zeros in a typical dataset [15, 7]). The abundance level of RNA for each gene is measured by counting the unique molecular identifier (UMI) reads in scRNA-seq experiments [16, 3]. However, many genes exhibit low UMI counts due to limited probing efficiency. Therefore, treating scRNA-seq data as an image and applying a convolutional neural network to extract features is also infeasible, as it introduces a huge number of redundant computations at sparse positions.

Third, the normalized gene expression values in scRNA-seq data are continuous scalars, where similar values typically indicate similar gene activity. To transform these scalars into high-dimensional tokens, a representation that preserves their continuous semantics is needed. Manually discretizing the gene expression values is challenging, as non-optimal discretization thresholds will bias category assignment. A learned discretization approach or learnable representation, such as the one proposed in [10], is ideal for preserving the continuous semantics of gene expression values.

Taking into account these three major features, we design a new architecture as described in the next section.

![](images/a39c191664734cc2f166c8703e457114275236b796f59f097701d7b4f1ed0d20.jpg)
Figure 1: The xTrimoGene Framework: (1) Random positions (including both zero and nonzero values) are masked for prediction. (2) Masked and zero-valued positions are filtered out.
(3) Remaining unmasked positions are aligned with padding tokens (grey) to ensure maximum-length consistency within a batch. (4) Gene expression values and gene embeddings are separately projected into embeddings. (5) These two embeddings are element-wise added. (6) The resulting input is fed into the encoder. (7) The intermediate encoder embedding is combined with embeddings for masked positions and zero embeddings. (8) This combined representation is then fed into the decoder. (9) The decoder embedding is projected to the model output with an MLP layer. The MSE loss is calculated between the model output and the ground-truth values at the masked positions.

# 3 xTrimoGene Architecture

xTrimoGene is a highly efficient framework for pre-training on large-scale single-cell RNA-seq data (illustrated in Figure 1). Training is based on a masked regression task, aimed at accurately recovering masked values in the expression matrix. Notably, a specifically optimized asymmetric encoder-decoder framework is employed to accelerate learning on sparse matrices. This is achieved by feeding only the unmasked non-zero positions (less than $10\%$ of the full length) into the encoder, while the largely masked and zero positions are input into a lightweight decoder with a reduced number of layers and attention heads. In addition, a novel auto-discretization strategy is introduced to project continuous expression values into a latent embedding space. Instead of rounding to the nearest integer, values are directly mapped to the latent space, allowing closely related values to be represented distinctly. The xTrimoGene framework consists of the following components:

Masking: A portion of the normalized gene expression matrix $V$ is masked for prediction, including both zero and non-zero positions. $c$ denotes the cell sample size, and $n$ denotes the number of genes (19,264 in our setting; see App. 1 for data collection and processing).
Filtering: The masked and zero-valued embeddings are filtered out, yielding a variable-length sequence of informative positions prepared for encoding.

Padding: The remaining unmasked positions are aligned with padding tokens, resulting in a much smaller unmasked-only matrix $V_{\text{unmasked}}$ . $m$ denotes the maximum unmasked length in a batch. We include a scheme illustrating the processing flow (see App. 2).

Embedding: Expression value and gene embeddings are separately projected. $d$ denotes the embedding dimension. The expression embedding is calculated through an auto-discretization mapping, while the gene embedding is retrieved from a randomly initialized lookup table.

Combining Expression and Gene Embeddings: The expression and gene embeddings ($E$ and $G$) are element-wise added to form the input embedding, which is then fed into the encoder.

Encoding: The sum of the embeddings is input into the encoder, which implements self-attention using a Transformer-like architecture.

Extending masked and zero embeddings: The intermediate encoder embedding $I_{encoder}$ is combined with embeddings for masked and zero-valued positions.

Decoding: The combined embeddings are processed by the decoder, which uses self-attention rather than the causal attention typical of NLP decoders.

Loss Computation: The decoder embedding is projected to the model output with an MLP layer. The mean squared error (MSE) loss is computed between the predicted masked values and their corresponding ground-truth values.

# 3.1 Encoder

The scRNA-seq data is characterized by its high sparsity, with cell information largely concentrated in the non-zero expression values. Thus, the encoder is designed to focus only on the non-zero part of the unmasked matrix, $V_{\text{unmasked}}$ .
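The filtering and padding steps that produce $V_{\text{unmasked}}$ can be sketched as follows. This is a minimal NumPy sketch; the function name, shapes, and the use of `-1` as a padding marker are our own illustrative assumptions, not the released implementation.

```python
import numpy as np

def filter_and_pad(V, mask, pad_value=0.0):
    # Keep only unmasked, non-zero positions of each cell (steps 2-3 of
    # Figure 1) and pad every cell to the batch's maximum unmasked length.
    keep = [np.flatnonzero((row != 0) & ~m) for row, m in zip(V, mask)]
    m_len = max(len(k) for k in keep)                  # m: max unmasked length
    values = np.full((len(V), m_len), pad_value)
    genes = np.full((len(V), m_len), -1, dtype=int)    # -1 marks padding
    for i, idx in enumerate(keep):
        values[i, :len(idx)] = V[i, idx]
        genes[i, :len(idx)] = idx
    return values, genes, genes >= 0                   # values, gene ids, pad mask

# Toy batch: 2 cells x 6 genes, mostly zeros; position (0, 3) is masked.
V = np.array([[0.0, 1.2, 0.0, 3.4, 0.0, 0.0],
              [0.0, 0.0, 2.2, 0.0, 0.0, 5.0]])
mask = np.zeros_like(V, dtype=bool)
mask[0, 3] = True
vals, genes, pad_mask = filter_and_pad(V, mask)
# vals is 2 x 2 rather than 2 x 6: only unmasked non-zero entries survive.
```

The encoder only ever sees this much shorter matrix, which is where the reduction to roughly 1/10 of the full sequence length comes from.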
The encoder is based on a traditional multi-head attention transformer and takes as input $I \in \mathbb{R}^{c \times m \times d}$ , the combination of the value embedding $E$ and the gene embedding $G$ . The value and gene embeddings are analogous to the word and positional embeddings in natural language modeling, respectively. The value embedding $E$ is generated using the auto-discretization strategy discussed previously, while the gene embedding $G$ is retrieved from a function $f_{L}$ that maps gene symbols into the embedding vocabulary.

$$
E = \operatorname{Autobin}\left(V_{\text{unmasked}} \odot M_{\text{nonzero}}\right), \quad G = f_{L}(\text{genes}), \quad I = E + G \tag{1}
$$

The encoder then processes the input embeddings $I$ and generates the high-level gene representations $I_{encoder} \in \mathbb{R}^{c \times m \times d}$ via the multi-head attention mechanism:

$$
I_{\text{encoder}} = \operatorname{Trm}\left(f_{Q}(I), f_{K}(I), f_{V}(I)\right) \tag{2}
$$

where $f_{Q}, f_{K}, f_{V}$ are the projection functions and Trm denotes the Transformer block.

It is worth emphasizing that our encoder operates on only a subset of genes, reducing the processed sequence length to about 1/10 of the original. This allows a full-length transformer to be used without any computational approximations.

# 3.2 Decoder

Unlike the encoder, which focuses on the main information in the cells (non-zero expression values), the decoder performs full-length feature abstraction and extraction. The input to the decoder, $I_{full}$ , comprises three token types: the output from the encoder, $I_{encoder}$ ; the zero-expression gene embeddings, $I_{zero}$ ; and the mask token embeddings, $I_{masked}$ . Genes with zero expression make up about $90\%$ of these tokens.
The gene embeddings are concatenated with all of these tokens to provide the decoder with gene-specific information for the corresponding mask tokens, followed by a fully connected layer:

$$
I_{full} = W_{p}\left(I_{\text{encoder}} \oplus I_{\text{zero}} \oplus I_{\text{masked}}\right) + b_{p} \tag{3}
$$

where $\oplus$ represents the concatenation operation, and $W_{p}$ and $b_{p}$ are learnable parameters that project to the decoder's embedding size.

The decoder is optimized for long-sequence attention calculations and employs the Performer architecture as its backbone. It transforms the input $I_{full}$ into final gene-level embeddings, $I_{decoder} \in \mathbb{R}^{c \times n \times d}$ , and predicts the masked values through a shared linear layer, $W \in \mathbb{R}^{d \times 1}$ , applied across all genes:

$$
I_{\text{decoder}} = \operatorname{Trm}\left(f_{Q}(I_{full}), f_{K}(I_{full}), f_{V}(I_{full})\right), \quad \tilde{V} = I_{\text{decoder}} \cdot W \tag{4}
$$

The decoder has a smaller model size than the encoder, with a smaller embedding size, fewer attention layers, and fewer attention heads. For instance, in the largest model configuration, the encoder-to-decoder layer depth ratio is 2:1 and the head number ratio is 1.5:1 (see App. Table 2). A similar asymmetric encoder-decoder design has proven powerful in masked autoencoders (MAE) [13], which are tailored for CV data pre-training. Unlike MAE, xTrimoGene uses a biased masking strategy to prevent the learning process from being dominated by zero tokens.
Though scRNA-seq data is distinct from images, our results show that the performance gains of xTrimoGene are comparable to those of MAE, with more efficient training and better downstream task performance.

# 3.3 Auto-discretization strategy

Our aim is to transform an expression value $v$ into a hidden embedding $e$ using an auto-discretization block. The block involves a randomly initialized look-up table $T \in \mathbb{R}^{d \times b}$ , where $d$ is the embedding dimension and $b$ is the number of bins (100 by default). The transformation starts by applying a linear layer to the expression value, $v_{1} = v \cdot w_{1}$ , where $w_{1}$ is a weight vector. The result is passed through a leaky ReLU activation, $v_{2} = \text{Leaky\_ReLU}(v_{1})$ . Next, a cross-layer projection is applied, $v_{3} = w_{2} \cdot v_{2} + \alpha \cdot v_{2}$ , where $w_{2}$ is a weight vector and $\alpha$ is a scaling mixture factor. The bin weights of $v_{3}$ are then normalized with the softmax function, $v_{4} = \text{softmax}(v_{3})$ . Finally, the transformed value is represented as a weighted combination of the individual bin embeddings from the look-up table, $e = T \cdot v_{4}$ , where the combination weights are learnable parameters.

To validate the effectiveness of the expression value projection, we analyzed the weight distribution patterns for continuous values. The normalized weight distributions of close values exhibited smooth transitions, while those of distant values were clearly distinguishable (App. section 3 Figure 1).
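The auto-discretization block can be sketched as follows. This is a NumPy sketch with random untrained weights; the dimensions, and treating $w_2$ as a $b \times b$ matrix, are our own assumptions rather than the paper's exact parameterization.

```python
import numpy as np

rng = np.random.default_rng(0)
d, b = 16, 100                       # embedding dim, number of bins
T = rng.normal(size=(d, b))          # learnable look-up table
w1 = rng.normal(size=b)              # scalar -> b "bin logits"
w2 = rng.normal(size=(b, b)) * 0.1   # cross-layer projection (assumed matrix)
alpha = 0.5                          # scaling mixture factor

def leaky_relu(x, slope=0.01):
    return np.where(x > 0, x, slope * x)

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

def auto_discretize(v):
    # Soft bin assignment: every continuous value becomes a softmax-weighted
    # mixture of bin embeddings instead of one hard integer bin.
    v1 = v * w1
    v2 = leaky_relu(v1)
    v3 = w2 @ v2 + alpha * v2
    v4 = softmax(v3)                 # bin weights, sum to 1
    return T @ v4                    # e = T . v4

e_a, e_b = auto_discretize(1.99), auto_discretize(2.01)
```

Because the map from $v$ to the bin weights is continuous, 1.99 and 2.01 receive nearly identical embeddings, in contrast to hard rounding.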
This supports the conclusion that the auto-discretization strategy effectively represents continuous values at high resolution while preserving relatively rich semantics.

We also compared the proposed auto-discretization strategy with three other discretization methods: (1) Round bin with zero, in which values are rounded to the nearest integer and zeros are kept as is; (2) Up bin without zero, in which values greater than zero are rounded up to the nearest integer, while zero is kept as a separate 0; (3) Equal bin, in which all values fall into fixed percentile intervals computed from the value distribution and frequency. We evaluated the different strategies on a standard cell clustering task (see App. 4) and found that the proposed auto-discretization strategy outperformed the others (Figure 2A), demonstrating the importance of high-resolution projections when handling expression values.

# 4 Training Strategy

We now explain the strategy used to train the asymmetric encoder-decoder transformer. The pre-training task and masking strategy are outlined below; see App. 5 for the acceleration strategy.

# 4.1 Regression masked task

The traditional masked language task is a multi-class classification problem, where the prediction target is a single token with a limited number of naturally distinct categories. In contrast, the normalized gene expression value is a continuous scalar. To fit this data property, we modify the pre-training

![](images/cc6027effb573b139d532592a35b930d17e46f8db4bc87d9c49a6c0e295363ae.jpg)
Figure 2: Pre-training strategy ablation study. (A) Performance comparison between the auto-discretization strategy and other binning methods for expression value projection. The cell clustering task is evaluated and five metrics are displayed: ARI (Adjusted Rand Index), NMI (Normalized Mutual Information), HOMO (Homogeneity), CP (Completeness), and SIL (Silhouette Coefficient).
(B) Performance of pre-trained models with different task modes, including regression and classification settings. The cell clustering task is evaluated. See the main text for details.

![](images/1d7c1f713a71d1c5a0e3b8c073643e2337591ba40ac94adaebdf17e5ff4aa650.jpg)

objective to a regression task, aimed at recovering the absolute values of the masked positions. The loss function employed is the MSE between the ground-truth and predicted values:

$$
\operatorname{Loss} = \frac{1}{(n - m) \cdot c} \sum_{i, j} \left(V_{i, j} - \tilde{V}_{i, j}\right)^{2} \tag{5}
$$

where $n$ is the total number of genes, $m$ is the maximum length of the unmasked positions in a sample, and $c$ is the number of cells. To evaluate the efficacy of this modification, we compared the regression setting with the classification setting on the cell clustering task. The results indicate that the regression model outperforms the classification model (Figure 2B), providing evidence of the benefit of learning a more fitted representation.

# 4.2 Masking strategy

We mask both non-zero and zero positions, even though the scRNA-seq expression matrix is highly sparse (the zero percentage is usually over $90\%$ ). Because zero positions vastly outnumber non-zero positions, the mask ratio cannot be the same for the two types; otherwise, the model tends to predict all zeros and still obtain a low error. We instead mask an almost equal number of zero and non-zero positions (see App. section 6 Table 1). This setting forces the model to learn embeddings for all values rather than being dominated by the zero representation. We found that supervision on zero values is necessary to boost performance (App. Figure 2), which demonstrates that some zeros represent truly extremely low expression levels. Such zeros are informative about how gene abundance behaves inside the cell.
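Equation (5) and the biased masking above can be sketched together. This is a NumPy sketch; the per-cell mask counts are illustrative, not the paper's actual ratios.

```python
import numpy as np

rng = np.random.default_rng(1)

def biased_mask(V, n_nonzero=2, n_zero=2):
    # Mask an (almost) equal number of zero and non-zero positions per cell,
    # so zeros do not dominate the training signal.
    mask = np.zeros(V.shape, dtype=bool)
    for i, row in enumerate(V):
        nz, z = np.flatnonzero(row != 0), np.flatnonzero(row == 0)
        mask[i, rng.choice(nz, size=min(n_nonzero, nz.size), replace=False)] = True
        mask[i, rng.choice(z, size=min(n_zero, z.size), replace=False)] = True
    return mask

def masked_mse(V, V_pred, mask):
    # Eq. (5): squared error averaged over the masked positions only.
    return np.mean((V[mask] - V_pred[mask]) ** 2)

V = np.array([[0.0, 1.0, 0.0, 3.0, 0.0, 2.0],
              [4.0, 0.0, 0.0, 0.0, 1.0, 0.0]])
mask = biased_mask(V)
loss = masked_mse(V, np.zeros_like(V), mask)   # predicting all zeros
```

Predicting all zeros still incurs a loss here, because half of the masked positions are non-zero by construction.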
The recovery of masked tokens in NLP is challenging because word comprehension relies heavily on long-range interactions rather than local context alone. Accurate inference of the missing tokens can be achieved at low masking ratios (15%), where the information in the sentence is still relatively redundant and encoded by the unmasked tokens. We investigated the density of information needed for the scRNA-seq regression task by training models with different masking ratios (for non-zero values, the ratio was set 10 times higher than for zero values) ranging from 15% to 90% at 15% intervals. The models were then evaluated on the cell clustering task; performance first improved and then degraded as the masking ratio increased, with the majority of metrics peaking near a 30% masking ratio (App. Figure 3). We also found the current biased masking to be optimal (App. Figure 4) and the percentage of [MASK] tokens to agree well with NLP tasks (App. Figure 5). These results suggest that the scRNA-seq expression vector contains more redundant information than a sentence and highlight the role of hidden regulation between genes in constraining the inference of expression values.

# 5 Experiments

We now describe our experimental settings and results; see App. 1 for the dataset description.

# 5.1 Computational efficiency

We quantitatively compared the training cost of xTrimoGene with two other encoder-only models: a full-length-attention Transformer and the kernel-based approximation Performer (scBERT). For an apples-to-apples comparison, all three models were set to approximately 10 million trainable parameters and trained on 5 million samples over 5 epochs. We calculated the corresponding FLOPs, counting only matrix multiplication operations. We observed that the total FLOPs for Performer (scBERT) decreased to $10\%$ of the native Transformer (see Table 1).
Notably, xTrimoGene runs 3 times faster than Performer. These results validate the efficiency of xTrimoGene, making it readily adaptable to large-scale pre-training.

Table 1: Computational efficiency comparison between different algorithms. The resource column is normalized by the Transformer row.
| Model name | Parameters (M) | Forward + backward (FLOPs/sample) | Total train (FLOPs) | Resource |
| --- | --- | --- | --- | --- |
| Transformer | 11.3 | 9.86E+12 | 2.46E+20 | 100% |
| Performer | 8.9 | 1.06E+12 | 2.65E+19 | 10.8% |
| xTrimoGene | 9.8 | 3.35E+11 | 8.38E+18 | 3.4% |
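The totals in Table 1 follow directly from per-sample cost times 5 million samples times 5 epochs; a quick arithmetic check:

```python
# Reproduce Table 1's totals and resource ratios from the per-sample FLOPs.
flops_per_sample = {
    "Transformer": 9.86e12,
    "Performer": 1.06e12,
    "xTrimoGene": 3.35e11,
}
samples, epochs = 5_000_000, 5

total = {k: v * samples * epochs for k, v in flops_per_sample.items()}
resource = {k: total[k] / total["Transformer"] for k in total}
# total["xTrimoGene"] is ~8.4e18 and resource["xTrimoGene"] is ~3.4%,
# matching the table's last row.
```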
# 5.2 Scalability

The Deep Learning community has shown significant interest in the scalability of proposed models [18, 2]. Vanilla Transformer models are challenging to scale because their computation time and resource requirements grow quadratically with sequence length. A variety of attention mechanisms have been proposed to accelerate training speed, a critical factor for model scaling.

To test the scale-up ability of xTrimoGene, we pre-trained three models at multiple compute scales (from 3M to 100M parameters). Detailed hyperparameter settings are given in App. Table 2. The training curves show that all models steadily reach lower loss as training steps increase (App. Figure 6). More importantly, the xTrimoGene-100M model obtains a significant improvement over the xTrimoGene-10M model, which in turn is superior to the xTrimoGene-3M model. This tendency is consistent across data sizes. The results suggest that the xTrimoGene framework scales robustly, making it possible and convenient to pre-train larger models with more data.

# 5.3 Robustness on highly sparse data

scRNA-seq data often exhibit varying levels of sparsity, so it is necessary to assess whether xTrimoGene is robust to data of different sparsity. To verify this, we divided the test samples into subgroups by cell type and calculated each subgroup's sparsity level (i.e., the percentage of zero values in the expression matrix) and the Pearson correlation coefficient between predicted and actual values. The correlation gradually decreases as the sparsity level increases, as expected (Figure 3A). However, the correlation remains above 0.8 even at a sparsity level of $96\%$ (Figure 3A), indicating the robustness of xTrimoGene. We also compared xTrimoGene's performance with Performer and found that xTrimoGene consistently achieves a higher correlation across most subgroups (Figure 3B).
These findings demonstrate that xTrimoGene is robust in handling highly sparse data and outperforms encoder-only architectures.

The performance of encoder-decoder and encoder-only architectures has been comparatively analyzed in the NLP domain, with the former demonstrating effectiveness in language comprehension and the latter in context generation. Beyond the comparison on masked value recovery, we further evaluated xTrimoGene against the encoder-only Performer on the cell clustering task. The results

![](images/88ba281cdce17df086ef475c2cd17dd55199b68fe71b6aef372e7f1638e2077b.jpg)
Figure 3: Comparison of performance at different sparsity levels. (A) xTrimoGene performance for recovering masked values at different sparsity levels. Each dot represents a subset defined by cell type. The sparsity level is calculated as the percentage of zero values. The Pearson correlation coefficient is calculated on masked positions. (B) Performance comparison of xTrimoGene and Performer when recovering masked values at different sparsity levels. Dots have the same meaning as in (A), with dot size proportional to the sparsity level. Both axes denote the Pearson correlation coefficient for the respective algorithm. (C) Comparison of the xTrimoGene framework and an encoder-only framework on the cell clustering task.

![](images/6ce8a2cb3eb50ac8128219982c2f258133c189f8590c8bc937ad6e5608a49d29.jpg)

![](images/aac21c2f54ac92fa870fb558086ed38ca5af4bff9971101e6ebd8debd8648ef1.jpg)

demonstrate that xTrimoGene achieves superior performance, reaffirming its proficiency in latent embedding extraction (Figure 3C).

# 5.4 Evaluation on downstream tasks

Multiple tasks have been established to evaluate models, including the well-defined cell type annotation task and the recently developed perturbation response prediction task. We first assessed the performance of xTrimoGene on these single-cell tasks.
Additionally, we explored a potential application to bulk RNA-sequencing data, focusing on synergistic drug combination prediction.

# 5.4.1 Cell type annotation

We first evaluated xTrimoGene on the cell type annotation task with the widely benchmarked Zheng68K [39] and Segerstolpe [31] datasets. We compared xTrimoGene against several other methods, including scBERT [36], ACTINN [23], Scanpy [34], CellTypist [6], scVI [21], and singleCellNet [37]. For the xTrimoGene model, we added a max-pooling layer and a linear layer to predict cell type labels in fine-tuning mode (see App. 8.1). For the other methods, we followed their instructions with default parameter settings. We observed that xTrimoGene achieves high Precision and F1 scores, surpassing all the other methods (Table 2). The results indicate that xTrimoGene learns a well-represented cellular embedding (visualized in App. Figure 7) by simply aggregating contextual gene embeddings.

Table 2: Cell annotation performance on the Zheng68K and Segerstolpe datasets. xTrimoGene is evaluated with the 100M-parameter model.
| Method Name | Zheng68K Precision | Zheng68K F1 score | Segerstolpe Precision | Segerstolpe F1 score |
| --- | --- | --- | --- | --- |
| xTrimoGene | 0.7335 ± 0.0226 | 0.7354 ± 0.0189 | 0.8112 ± 0.0009 | 0.8140 ± 0.0008 |
| scBERT | 0.7029 ± 0.0115 | 0.6695 ± 0.0077 | 0.6818 ± 0.0736 | 0.6703 ± 0.0653 |
| ACTINN | 0.6720 ± 0.0021 | 0.6486 ± 0.0041 | 0.7545 ± 0.0018 | 0.7219 ± 0.0073 |
| Scanpy | 0.6111 ± 0.0017 | 0.5474 ± 0.0085 | 0.6274 ± 0.0000 | 0.5398 ± 0.0000 |
| CellTypist | 0.7454 ± 0.0009 | 0.7151 ± 0.0038 | 0.7923 ± 0.0003 | 0.8117 ± 0.0001 |
| scVI | 0.4883 ± 0.0005 | 0.4843 ± 0.0008 | 0.5101 ± 0.0022 | 0.5208 ± 0.0016 |
| singleCellNet | 0.6452 ± 0.0013 | 0.5982 ± 0.0027 | 0.7551 ± 0.0096 | 0.8055 ± 0.0076 |
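The fine-tuning head described above (max-pooling over contextual gene embeddings followed by a linear layer) can be sketched as follows. This is a NumPy sketch with made-up dimensions and random weights, not the released implementation.

```python
import numpy as np

rng = np.random.default_rng(2)
d, n_types = 16, 5                          # embedding dim, number of cell types
W, b = rng.normal(size=(d, n_types)), np.zeros(n_types)

def annotate(gene_embeddings):
    # Max-pool across genes to get one cell-level vector, then map it to
    # cell-type logits with a single linear layer.
    cell = gene_embeddings.max(axis=0)      # (d,)
    return cell @ W + b                     # (n_types,)

logits = annotate(rng.normal(size=(200, d)))   # 200 contextual gene embeddings
pred = int(np.argmax(logits))
```

In the real setup, the gene embeddings would come from the pre-trained encoder, and `W`, `b` would be trained on labeled cells.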
# 5.4.2 Perturbation response prediction

Recently, perturb-seq technology was established to screen gene expression responses to pooled perturbations at the single-cell level [8]. Several algorithms have been developed to predict perturbation effects at the single-cell level [30, 22], i.e., the expression values of genes after perturbation. We compared the native GEARS [30] model with and without incorporating embeddings from xTrimoGene.

The normal-state (pre-perturbation) gene expression profile is fed into xTrimoGene to obtain the context embedding, which replaces the raw expression value input in the GEARS model (App. 8.2). All other settings remain unchanged. The evaluated dataset (Norman et al. [26]) contains both single and double gene perturbations, so we assess performance across different perturbation levels. As shown in Figure 4A, GEARS with xTrimoGene embeddings scores a lower MSE (a $14.8\%$ decrease) for the top-20 differentially expressed genes across all perturbation scenarios. Notably, the tendency is consistent across perturbation levels, regardless of whether the perturbed target is seen during training. We also compared against the scBERT embedding and observed a similar trend, with xTrimoGene achieving better results (App. Table 3). These results demonstrate that the pre-training strategy enables xTrimoGene to capture constraints under various circumstances, including post-perturbation states, and further prove its efficacy and potential to boost scRNA-seq-based tasks.

![](images/3c7e102cb6538c990c8354bcebbd32f7a41da98b8fd899c84e9771dead95c3f5.jpg)
Figure 4: (A) The MSE of the top 20 differentially expressed (DE) genes given by different models on perturbation response prediction. The top 20 DE genes are calculated between the pre- and post-perturbation expression profiles. "Total" denotes evaluation on all test perturbation sets.
"1-gene" denotes evaluation on the single gene perturbation subset, where the perturbed target is not seen in the training set. "2-gene" represents the sub-test set for perturbing two genes simultaneously. "seen0", "seen1" and "seen2" denotes zero, one or two perturbed targets are not seen in the training set, respectively. The black line denotes a $95\%$ confidence interval. (B) ROC curve of different models on drug combination synergy prediction task. xTrimoGene denotes replacing the raw expression profile with context embeddings in the DeepDDS framework and others remain unchanged. Refer to App. 8.3 for more details. + +![](images/acaca924e006b86371eedf101958ae5b30795481779683b389b1006142623bfc.jpg) + +# 5.4.3 Synergistic drug combinations prediction + +The drug synergistic task evaluates how patients or cells respond to a drug combination intervention [25]. However, the generated wet-lab experimental data only covers a tiny search space of possible drug combinations. Multiple models have been proposed to accelerate predicting the synergistic landscape of drugs [28, 33]. For instance, DeepDDS integrates genomic expression profiles and drug chemical information, greatly improving the prediction performance. We further explored whether xTrimoGene is able to generate good latent embedding for this bulk expression data. + +Similar to the perturbation prediction test, we adapted xTrimoGene to DeepDDS with the intermediate context embedding (see App. 8.3). We also included DeepSynergy and Random Forest for comparison. As illustrated in Figure 4B, utilizing embedding from the xTrimoGene model outperforms all the other models. The result proved that xTrimoGene can accurately capture cell-level representation, even for bulk sequencing data. This also opens the avenue for xTrimoGene to be applied across other biological modeling tasks, especially where bulk-level transcriptome data is available. 
# 6 Conclusion

xTrimoGene is a new, efficient framework for learning from scRNA-seq data. It proposes an asymmetric encoder-decoder framework that takes advantage of the sparsity of the gene expression matrix, and establishes a higher-resolution projection strategy for continuous values. The results show that xTrimoGene is scalable and performs well on tasks such as cell type annotation, perturbation response prediction, and synergistic drug combination prediction. The experiments demonstrate the efficacy of pre-training in single-cell biology. xTrimoGene can potentially be adapted to other types of cell modeling analysis, including rare cell detection (App. 8.4), batch effect removal, and regulatory network construction.

Certain limitations exist for xTrimoGene, and further work is desired to advance the design. At present, xTrimoGene mainly utilizes gene expression values during the pre-training stage, overlooking other related meta-information such as sample condition (health/disease), cell type, tissue type, and sequencing platform. These rich annotations are biologically meaningful and highly correlated with the expression pattern within a cell. The memory consumption for inference with the xTrimoGene-100M model is approximately 50GB, a hardware requirement (an NVIDIA A100 80GB GPU) beyond the reach of some academic labs; computation- and memory-efficient engineering techniques would therefore further advance model pre-training and application.

xTrimoGene has been integrated into BioMap's single-cell analysis platform, functioning as a fundamental model (as depicted in App. Figure 9). The pre-trained model services are publicly available. In the future, as data grow, larger pre-trained models are expected to drive further advances in various downstream learning tasks.
+
+# Author Contributions and Acknowledgments
+
+Le Song led the project by designing its scope, conceptualizing ideas, integrating resources, and making decisions on techniques. Xingyi Cheng played a key role in the development of the xTrimoGene framework, auto-discretization, and unsupervised objectives, contributing concrete ideas and pseudocode, along with code review. Jing Gong and Mingsheng Hao (Research Intern at BioMap) were primarily responsible for conducting pre-training and downstream experiments, serving as the first authors of the paper. Their work covered areas such as model scaling, cell type annotation, perturbation response prediction, and synergistic drug combinations prediction. Xin Zeng made significant contributions to the code of the xTrimoGene framework, worked with an early Performer version, and conducted initial downstream experiments. Chiming Liu oversaw the engineering aspects of the project, including the implementation of the data pipeline and FLOPs computation. Jianzhu Ma, Xuegong Zhang, Taifeng Wang, and Le Song conceived the project, provided invaluable guidance, and contributed their expertise in computational biology. Taifeng Wang also played a pivotal role in pushing for the model's service implementation. Finally, Jing Gong, Mingsheng Hao, Xingyi Cheng, and Le Song collectively contributed to writing this paper.
+
+In addition, we would like to express our gratitude to the individuals at BioMap who have made contributions to our project. Chenrui Xu and Yucheng Guo played roles in data preprocessing and integration. Zhaoren He's expertise in data analysis and application greatly enhanced our work, and we deeply appreciate his contributions.
+
+This work was supported by the Ministry of Science and Technology of the People's Republic of China (2022YFF1203004), the Beijing Municipal Science & Technology Commission, and the Administrative Commission of Zhongguancun Science Park (Z221100003522022).
This work was also funded by BioMap.
+
+# References
+
+[1] Nadav Brandes, Dan Ofer, Yam Peleg, Nadav Rappoport, and Michal Linial. ProteinBERT: a universal deep-learning model of protein sequence and function. Bioinformatics, 38(8):2102-2110, 02 2022.
+[2] Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. Advances in Neural Information Processing Systems, 33:1877-1901, 2020.
+
+[3] Geng Chen, Baitang Ning, and Tieliu Shi. Single-cell RNA-seq technologies and related computational data analysis. Frontiers in Genetics, page 317, 2019.
+[4] Jiayang Chen, Zhihang Hu, Siqi Sun, Qingxiong Tan, Yixuan Wang, Qinze Yu, Licheng Zong, Liang Hong, Jin Xiao, Tao Shen, et al. Interpretable RNA foundation model from unannotated data for highly accurate RNA structure and function predictions. bioRxiv, pages 2022-08, 2022.
+[5] Sijie Chen, Yanting Luo, Haoxiang Gao, Fanhong Li, Yixin Chen, Jiaqi Li, Renke You, Minsheng Hao, Haiyang Bian, Xi Xi, et al. hECA: The cell-centric assembly of a cell atlas. iScience, 25(5):104318, 2022.
+[6] C. Dominguez Conde, C. Xu, L. B. Jarvis, D. B. Rainbow, S. B. Wells, T. Gomes, S. K. Howlett, O. Suchanek, K. Polanski, H. W. King, L. Mamanova, N. Huang, P. A. Szabo, L. Richardson, L. Bolt, E. S. Fasouli, K. T. Mahbubani, M. Prete, L. Tuck, N. Richoz, Z. K. Tuong, L. Campos, H. S. Mousa, E. J. Needham, S. Pritchard, T. Li, R. Elmentaite, J. Park, E. Rahmani, D. Chen, D. K. Menon, O. A. Bayraktar, L. K. James, K. B. Meyer, N. Yosef, M. R. Clatworthy, P. A. Sims, D. L. Farber, K. Saeb-Parsy, J. L. Jones, and S. A. Teichmann. Cross-tissue immune cell analysis reveals tissue-specific features in humans. Science, 376(6594):eabl5197, 2022.
+[7] Jiarui Ding, Xian Adiconis, Sean K. Simmons, Monika S. Kowalczyk, Cynthia C. Hession, Nemanja D. Marjanovic, Travis K. Hughes, Marc H. Wadsworth, Tyler Burks, Lan T.
Nguyen, John Y. H. Kwon, Boaz Barak, William Ge, Amanda J. Kedaigle, Shaina Carroll, Shuqiang Li, Nir Hacohen, Orit Rozenblatt-Rosen, Alex K. Shalek, Alexandra-Chloe Villani, Aviv Regev, and Joshua Z. Levin. Systematic comparison of single-cell and single-nucleus RNA-sequencing methods. Nature Biotechnology, 38(6):737-746, Jun 2020.
+[8] Atray Dixit, Oren Parnas, Biyu Li, Jenny Chen, Charles P Fulco, Livnat Jerby-Arnon, Nemanja D Marjanovic, Danielle Dionne, Tyler Burks, Raktima Raychowdhury, et al. Perturb-seq: dissecting molecular circuits with scalable single-cell RNA profiling of pooled genetic screens. Cell, 167(7):1853-1866, 2016.
+[9] Mario Flores, Zhentao Liu, Tinghe Zhang, Md Musaddaqui Hasib, Yu-Chiao Chiu, Zhenqing Ye, Karla Paniagua, Sumin Jo, Jianqiu Zhang, Shou-Jiang Gao, et al. Deep learning tackles single-cell analysis—a survey of deep learning for scRNA-seq analysis. Briefings in Bioinformatics, 23(1):bbab531, 2022.
+[10] Yury Gorishniy, Ivan Rubachev, and Artem Babenko. On embeddings for numerical features in tabular deep learning. Advances in Neural Information Processing Systems, 35:24991-25004, 2022.
+[11] Christopher Heje Grønbech, Maximillian Fornitz Vording, Pascal N Timshel, Casper Kaae Sønderby, Tune H Pers, and Ole Winther. scVAE: variational auto-encoders for single-cell gene expression data. Bioinformatics, 36(16):4415-4422, 05 2020.
+[12] Xu Han, Zhengyan Zhang, Ning Ding, Yuxian Gu, Xiao Liu, Yuqi Huo, Jiezhong Qiu, Yuan Yao, Ao Zhang, Liang Zhang, Wentao Han, Minlie Huang, Qin Jin, Yanyan Lan, Yang Liu, Zhiyuan Liu, Zhiwu Lu, Xipeng Qiu, Ruihua Song, Jie Tang, Ji-Rong Wen, Jinhui Yuan, Wayne Xin Zhao, and Jun Zhu. Pre-trained models: Past, present and future. AI Open, 2:225-250, 2021.
+[13] Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr Dollár, and Ross Girshick. Masked autoencoders are scalable vision learners. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 16000-16009, 2022.
+
+[14] Dan Hendrycks, Kimin Lee, and Mantas Mazeika. Using pre-training can improve model robustness and uncertainty. In Kamalika Chaudhuri and Ruslan Salakhutdinov, editors, Proceedings of the 36th International Conference on Machine Learning, volume 97 of Proceedings of Machine Learning Research, pages 2712-2721. PMLR, 09-15 Jun 2019.
+[15] Ruochen Jiang, Tianyi Sun, Dongyuan Song, and Jingyi Jessica Li. Statistics or biology: the zero-inflation controversy about scRNA-seq data. Genome Biology, 23(1):31, Jan 2022.
+[16] Dragomirka Jovic, Xue Liang, Hua Zeng, Lin Lin, Fengping Xu, and Yonglun Luo. Single-cell RNA sequencing technologies and applications: A brief overview. Clinical and Translational Medicine, 12(3):e694, 2022.
+
+[17] John Jumper, Richard Evans, Alexander Pritzel, Tim Green, Michael Figurnov, Olaf Ronneberger, Kathryn Tunyasuvunakool, Russ Bates, Augustin Žídek, Anna Potapenko, et al. Highly accurate protein structure prediction with AlphaFold. Nature, 596(7873):583-589, 2021.
+[18] Jared Kaplan, Sam McCandlish, Tom Henighan, Tom B Brown, Benjamin Chess, Rewon Child, Scott Gray, Alec Radford, Jeffrey Wu, and Dario Amodei. Scaling laws for neural language models. arXiv preprint arXiv:2001.08361, 2020.
+[19] Mengwei Li, Xiaomeng Zhang, Kok Siong Ang, Jingjing Ling, Raman Sethi, Nicole Yee Shin Lee, Florent Ginhoux, and Jinmiao Chen. DISCO: a database of deeply integrated human single-cell omics data. Nucleic Acids Research, 50(D1):D596-D602, 2022.
+[20] Zeming Lin, Halil Akin, Roshan Rao, Brian Hie, Zhongkai Zhu, Wenting Lu, Nikita Smetanin, Robert Verkuil, Ori Kabeli, Yaniv Shmueli, et al. Evolutionary-scale prediction of atomic-level protein structure with a language model. Science, 379(6637):1123-1130, 2023.
+[21] Romain Lopez, Jeffrey Regier, Michael B Cole, Michael I Jordan, and Nir Yosef. Deep generative modeling for single-cell transcriptomics. Nature Methods, 15(12):1053-1058, 2018.
+
+[22] Mohammad Lotfollahi, Anna Klimovskaia Susmelj, Carlo De Donno, Yuge Ji, Ignacio L Ibarra, F Alexander Wolf, Nafissa Yakubova, Fabian J Theis, and David Lopez-Paz. Compositional perturbation autoencoder for single-cell response modeling. bioRxiv, 2021.
+[23] Feiyang Ma and Matteo Pellegrini. ACTINN: automated identification of cell types in single cell RNA sequencing. Bioinformatics, 36(2):533-538, 2020.
+[24] Mohamed Marouf, Pierre Machart, Vikas Bansal, Christoph Kilian, Daniel S. Magruder, Christian F. Krebs, and Stefan Bonn. Realistic in silico generation and augmentation of single-cell RNA-seq data using generative adversarial networks. Nature Communications, 11(166), 2020.
+[25] Reza Bayat Mokhtari, Tina S Homayouni, Narges Baluch, Evgeniya Morgatskaya, Sushil Kumar, Bikul Das, and Herman Yeger. Combination therapy in combating cancer. Oncotarget, 8(23):38022, 2017.
+[26] Thomas M. Norman, Max A. Horlbeck, Joseph M. Replogle, Alex Y. Ge, Albert Xu, Marco Jost, Luke A. Gilbert, and Jonathan S. Weissman. Exploring genetic interaction manifolds constructed from rich single-cell phenotypes. Science, 365(6455):786-793, 2019.
+[27] Ahmad Pesaranghader, Stan Matwin, Marina Sokolova, Jean-Christophe Grenier, Robert G Beiko, and Julie Hussin. deepSimDEF: deep neural embeddings of gene products and gene ontology terms for functional analysis of genes. Bioinformatics, 38(11):3051-3061, 05 2022.
+[28] Kristina Preuer, Richard P I Lewis, Sepp Hochreiter, Andreas Bender, Krishna C Bulusu, and Günter Klambauer. DeepSynergy: predicting anti-cancer drug synergy with Deep Learning. Bioinformatics, 34(9):1538-1546, 12 2017.
+[29] Xipeng Qiu, Tianxiang Sun, Yige Xu, Yunfan Shao, Ning Dai, and Xuanjing Huang. Pre-trained models for natural language processing: A survey. Science China Technological Sciences, 63(10):1872-1897, 2020.
+[30] Yusuf Roohani, Kexin Huang, and Jure Leskovec. GEARS: Predicting transcriptional outcomes of novel multi-gene perturbations.
bioRxiv, pages 2022-07, 2022.
+[31] Åsa Segerstolpe, Athanasia Palasantza, Pernilla Eliasson, Eva-Marie Andersson, Anne-Christine Andreasson, Xiaoyan Sun, Simone Picelli, Alan Sabirsh, Maryam Clausen, Magnus K Bjursell, David M Smith, Maria Kasper, Carina Ämmälä, and Rickard Sandberg. Single-cell transcriptome profiling of human pancreatic islets in health and type 2 diabetes. Cell Metabolism, 24(4):593-607, October 2016.
+[32] Jingshu Wang, Divyansh Agarwal, Mo Huang, Gang Hu, Zilu Zhou, Chengzhong Ye, and Nancy R Zhang. Data denoising with transfer learning in single-cell transcriptomics. Nature Methods, 16(9):875-878, 2019.
+
+[33] Jinxian Wang, Xuejun Liu, Siyuan Shen, Lei Deng, and Hui Liu. DeepDDS: deep graph neural network with attention mechanism to predict synergistic drug combinations. Briefings in Bioinformatics, 23(1):bbab390, 2022.
+[34] F Alexander Wolf, Philipp Angerer, and Fabian J Theis. SCANPY: large-scale single-cell gene expression data analysis. Genome Biology, 19:1-5, 2018.
+[35] Yijia Xiao, Jiezhong Qiu, Ziang Li, Chang-Yu Hsieh, and Jie Tang. Modeling protein using large-scale pretrain language model. arXiv preprint arXiv:2108.07435, 2021.
+[36] Fan Yang, Wenchuan Wang, Fang Wang, Yuan Fang, Duyu Tang, Junzhou Huang, Hui Lu, and Jianhua Yao. scBERT as a large-scale pretrained deep language model for cell type annotation of single-cell RNA-seq data. Nature Machine Intelligence, 4(10):852-866, 2022.
+[37] Yuqi Tan and Patrick Cahan. SingleCellNet: a computational tool to classify single cell RNA-seq data across platforms and across species. Cell Systems, 19(2):207-213, 2019.
+[38] Ningyu Zhang, Zhen Bi, Xiaozhuan Liang, Siyuan Cheng, Haosen Hong, Shumin Deng, Jiazhang Lian, Qiang Zhang, and Huajun Chen. OntoProtein: Protein pretraining with gene ontology embedding. arXiv preprint arXiv:2201.11147, 2022.
+[39] Grace XY Zheng, Jessica M Terry, Phillip Belgrader, Paul Ryvkin, Zachary W Bent, Ryan Wilson, Solongo B Ziraldo, Tobias D Wheeler, Geoff P McDermott, Junjie Zhu, et al. Massively parallel digital transcriptional profiling of single cells. Nature communications, 8(1):14049, 2017. \ No newline at end of file diff --git a/xtrimogeneanefficientandscalablerepresentationlearnerforsinglecellrnaseqdata/images.zip b/xtrimogeneanefficientandscalablerepresentationlearnerforsinglecellrnaseqdata/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..5f306b30c00f748fd27409c8a201369ed0b89670 --- /dev/null +++ b/xtrimogeneanefficientandscalablerepresentationlearnerforsinglecellrnaseqdata/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a5c8f73fd854438e1906f548e008cd73fb517ed7d7d0caebd735cc7339922d9b +size 305540 diff --git a/xtrimogeneanefficientandscalablerepresentationlearnerforsinglecellrnaseqdata/layout.json b/xtrimogeneanefficientandscalablerepresentationlearnerforsinglecellrnaseqdata/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..c76e7dc8ea63ac6340f869eecaf2f378b4e8ac76 --- /dev/null +++ b/xtrimogeneanefficientandscalablerepresentationlearnerforsinglecellrnaseqdata/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:148094e26b0d7ca3ad7a5b854381e60c5d9564357da00a60ca7cdce8bfdc3a55 +size 318748 diff --git a/youonlycondenseoncetworulesforpruningcondenseddatasets/dbfde00c-7c4b-48aa-b68d-8b0f07b73866_content_list.json b/youonlycondenseoncetworulesforpruningcondenseddatasets/dbfde00c-7c4b-48aa-b68d-8b0f07b73866_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..0a22d886d0385d01c8674746a6615e16b8e589c4 --- /dev/null +++ b/youonlycondenseoncetworulesforpruningcondenseddatasets/dbfde00c-7c4b-48aa-b68d-8b0f07b73866_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid 
sha256:a26d64dad5fdd51c595dc15c62c6df8905065a78c29a9d45702ff154c097d3c0 +size 87407 diff --git a/youonlycondenseoncetworulesforpruningcondenseddatasets/dbfde00c-7c4b-48aa-b68d-8b0f07b73866_model.json b/youonlycondenseoncetworulesforpruningcondenseddatasets/dbfde00c-7c4b-48aa-b68d-8b0f07b73866_model.json new file mode 100644 index 0000000000000000000000000000000000000000..8d79089740773ec522cbacb0448dbe77a8d6901c --- /dev/null +++ b/youonlycondenseoncetworulesforpruningcondenseddatasets/dbfde00c-7c4b-48aa-b68d-8b0f07b73866_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d9c7f77f440defa17742ea4c5dfe6ffcbc012fb1f5d0bcc855aaf606f75799e3 +size 108620 diff --git a/youonlycondenseoncetworulesforpruningcondenseddatasets/dbfde00c-7c4b-48aa-b68d-8b0f07b73866_origin.pdf b/youonlycondenseoncetworulesforpruningcondenseddatasets/dbfde00c-7c4b-48aa-b68d-8b0f07b73866_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..3e8600845136b7a9bdc36a0a219c08d91caf30d8 --- /dev/null +++ b/youonlycondenseoncetworulesforpruningcondenseddatasets/dbfde00c-7c4b-48aa-b68d-8b0f07b73866_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0914320602b6aebe6d7cda87d51b70aebe8463426627bc5a18b42accfbb2d6f8 +size 2498292 diff --git a/youonlycondenseoncetworulesforpruningcondenseddatasets/full.md b/youonlycondenseoncetworulesforpruningcondenseddatasets/full.md new file mode 100644 index 0000000000000000000000000000000000000000..b08350ce83d7c47e1049747675e40ca5e6fba5be --- /dev/null +++ b/youonlycondenseoncetworulesforpruningcondenseddatasets/full.md @@ -0,0 +1,387 @@ +# You Only Condense Once: Two Rules for Pruning Condensed Datasets + +Yang He, Lingao Xiao, Joey Tianyi Zhou + +CFAR, Agency for Science, Technology and Research, Singapore + +IHPC, Agency for Science, Technology and Research, Singapore + +{He_Yang, Joey_Zhou}@cfar.a-star.edu.sg + +# Abstract + +Dataset condensation is a crucial tool for enhancing 
training efficiency by reducing the size of the training dataset, particularly in on-device scenarios. However, these scenarios have two significant challenges: 1) the varying computational resources available on the devices require a dataset size different from the pre-defined condensed dataset, and 2) the limited computational resources often preclude the possibility of conducting additional condensation processes. We introduce You Only Condense Once (YOCO) to overcome these limitations. On top of one condensed dataset, YOCO produces smaller condensed datasets with two embarrassingly simple dataset pruning rules: Low LBPE Score and Balanced Construction. YOCO offers two key advantages: 1) it can flexibly resize the dataset to fit varying computational constraints, and 2) it eliminates the need for extra condensation processes, which can be computationally prohibitive. Experiments validate our findings on networks including ConvNet, ResNet and DenseNet, and datasets including CIFAR-10, CIFAR-100 and ImageNet. For example, our YOCO surpassed various dataset condensation and dataset pruning methods on CIFAR-10 with ten Images Per Class (IPC), achieving $6.98 - 8.89\%$ and $6.31 - 23.92\%$ accuracy gains, respectively. The code is available at: https://github.com/he-y/you-only-condense-once. + +# 1 Introduction + +Deep learning models often require vast amounts of data to achieve optimal performance. This data-hungry nature of deep learning algorithms, coupled with the growing size and complexity of datasets, has led to the need for more efficient dataset handling techniques. Dataset condensation is a promising approach that enables models to learn from a smaller and more representative subset of the entire dataset. Condensed datasets are especially utilized in on-device scenarios, where limited computational resources and storage constraints necessitate the use of a compact training set. + +However, these on-device scenarios have two significant constraints. 
First, the diverse and fluctuating computational resources inherent in these scenarios necessitate a level of flexibility in the size of the dataset. This flexibility, however, is not accommodated by the fixed sizes of previous condensed datasets. Second, the limited computational capacity of these devices also makes extra condensation processes impractical, if not impossible. Therefore, the need for adaptability in the size of the condensed dataset becomes increasingly crucial. Furthermore, this adaptability needs to be realized without introducing another computationally intensive condensation process.
+
+We introduce You Only Condense Once (YOCO) to enable the flexible resizing (pruning) of condensed datasets to fit varying on-device scenarios without an extra condensation process (see Fig. 1). The first rule of our proposed method involves a metric to evaluate the importance of training samples in the context of dataset condensation.
+
+![](images/0d4a3b72f83dcab9e78c42c2f595f5038f21ac0d2847027897a65f49bab5d40c.jpg)
+Figure 1: Previous methods (left) require extra condensation processes, but ours (right) do not.
+
+From the gradient of the loss function, we develop the Logit-Based Prediction Error (LBPE) score to rank training samples. This metric quantifies the neural network's difficulty in recognizing each sample. Specifically, training samples with low LBPE scores are considered easy, as they indicate that the model's prediction is close to the true label. These samples exhibit simpler patterns that are easily captured by the model. Given the condensed datasets' small size, prioritizing easier samples with low LBPE scores is crucial to avoid overfitting.
+
+A further concern is that relying solely on a metric-based ranking could result in an imbalanced distribution of classes within the dataset. Imbalanced datasets can lead to biased predictions, as models tend to focus on the majority class, ignoring the underrepresented minority classes.
This issue has not been given adequate attention in prior research on dataset pruning [3, 41, 34], but it is particularly important when dealing with condensed datasets. To delve deeper into the effects of class imbalance, we explore Rademacher Complexity [31], a widely recognized metric for model complexity that is intimately connected to generalization error and expected loss. Based on this analysis, we propose Balanced Construction to ensure that the condensed dataset is both informative and balanced.
+
+The key contributions of our work are: 1) To the best of our knowledge, this is the first work to provide a solution for adaptively adjusting the size of a condensed dataset to fit varying computational constraints. 2) After analyzing the gradient of the loss function, we propose the LBPE score to evaluate sample importance and find that easy samples with low LBPE scores are well suited for condensed datasets. 3) Our analysis of the Rademacher Complexity highlights the challenges of class imbalance in condensed datasets, leading us to construct balanced datasets.
+
+# 2 Related Works
+
+# 2.1 Dataset Condensation/Distillation
+
+Dataset condensation, or distillation, synthesizes a compact image set to maintain the original dataset's information. Wang et al. [45] pioneer an approach that leverages gradient-based hyperparameter optimization to model network parameters as a function of this synthetic data. Building on this, Zhao et al. [53] introduce gradient matching between models trained on real and synthetic images. Kim et al. [19] further extend this, splitting synthetic data into $n$ factor segments, each decoding into $n^2$ training images. Zhao & Bilen [50] apply consistent differentiable augmentation to real and synthetic data, thus enhancing information distillation. Cazenavette et al. [1] propose to emulate the long-range training dynamics of real data by aligning learning trajectories. Liu et al.
[24] advocate matching only representative real dataset images, selected based on latent space cluster centroid distances. Additional research avenues include integrating a contrastive signal [21], matching distribution or features [52, 44], matching multi-level gradients [16], minimizing accumulated trajectory error [7], aligning loss curvature [39], parameterizing datasets [6, 23, 51, 43], and optimizing dataset condensation [27, 32, 33, 42, 55, 25, 49, 4, 26]. Nevertheless, the fixed size of condensed datasets remains an unaddressed constraint in prior work. + +# 2.2 Dataset Pruning + +Unlike dataset condensation that alters image pixels, dataset pruning preserves the original data by selecting a representative subset. Entropy [3] keeps hard samples with maximum entropy (uncertainty [22, 38]), using a smaller proxy network. Forgetting [41] defines forgetting events as an accuracy drop at consecutive epochs, and hard samples with the most forgetting events are important. AUM [35] identifies data by computing the Area Under the Margin, the difference between the true label logits and the largest other logits. Memorization [11] prioritizes a sample if it significantly improves the probability of correctly predicting the true label. GraNd [34] and EL2N [34] classify samples as hard based on the presence of large gradient norm and large norm of error vectors, respectively. CCS [54] extends previous methods by pruning hard samples and using stratified sampling to + +achieve good coverage of data distributions at a large pruning ratio. Moderate [47] selects moderate samples (neither hard nor easy) that are close to the score median in the feature space. Optimization-based [48] chooses samples yielding a strictly constrained generalization gap. In addition, other dataset pruning (or coreset selection) methods [8, 28, 36, 17, 30, 18, 2, 29, 9, 10, 46, 15] are widely used for Active Learning [38, 37]. 
However, the previous methods consider full datasets and often neglect condensed datasets.
+
+# 3 Method
+
+# 3.1 Preliminaries
+
+We denote a dataset by $S = \{(x_{i}, y_{i})\}_{i=1}^{N}$, where $x_{i}$ represents the $i^{th}$ input and $y_{i}$ represents the corresponding true label. Let $\mathcal{L}(p(\mathbf{w}, x), y)$ be a loss function that measures the discrepancy between the predicted output $p(\mathbf{w}, x)$ and the true label $y$. The loss function is parameterized by a weight vector $\mathbf{w}$, which we optimize during the training process.
+
+We consider a time-indexed sequence of weight vectors, $\mathbf{w}_t$, where $t = 1, \dots, T$. This sequence represents the evolution of the weights during the training process. The gradient of the loss function with respect to the weights at time $t$ is given by $g_{t}(x, y) = \nabla_{\mathbf{w}_{t}} \mathcal{L}(p(\mathbf{w}_{t}, x), y)$.
+
+# 3.2 Identifying Important Training Samples
+
+Our goal is to propose a measure that quantifies the importance of a training sample. We begin by analyzing the gradient of the loss function with respect to the weights $\mathbf{w}_t$:
+
+$$
+\nabla_{\mathbf{w}_{t}} \mathcal{L}(p(\mathbf{w}_{t}, x), y) = \frac{\partial \mathcal{L}(p(\mathbf{w}_{t}, x), y)}{\partial p(\mathbf{w}_{t}, x)} \cdot \frac{\partial p(\mathbf{w}_{t}, x)}{\partial \mathbf{w}_{t}}. \tag{1}
+$$
+
+We aim to determine the impact of training samples on the gradient of the loss function, as the gradient plays a critical role in the training process of gradient-based optimization methods. Note that our ranking method is inspired by EL2N [34], but we interpret it in a different way by explicitly considering the dataset size $|S|$.
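Equation (1) is the standard chain-rule factorization of a per-sample gradient. As a quick sanity check (our own illustration, not part of the paper), the sketch below evaluates both factors for a linear model $p(\mathbf{w}, x) = \mathbf{w}^{\top} x$ under an MSE loss and confirms their product against a finite-difference gradient:

```python
import numpy as np

# Eq. (1) for an illustrative linear model p(w, x) = w @ x with MSE loss
# L = 0.5 * (p - y)^2; the paper's analysis itself is model-agnostic.
rng = np.random.default_rng(1)
w = rng.normal(size=3)
x = rng.normal(size=3)
y = 0.7

p = w @ x
dL_dp = p - y          # dL/dp for the MSE loss
dp_dw = x              # dp/dw for a linear model
grad = dL_dp * dp_dw   # chain-rule product, as in Eq. (1)

# Cross-check against a central finite-difference gradient.
eps = 1e-6
loss = lambda w_: 0.5 * (w_ @ x - y) ** 2
fd = np.array([(loss(w + eps * e) - loss(w - eps * e)) / (2 * eps)
               for e in np.eye(3)])
print(np.allclose(grad, fd, atol=1e-5))  # True
```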
+ +Definition 1 (Logit-Based Prediction Error - LBPE): The Logit-Based Prediction Error (LBPE) of a training sample $(x,y)$ at time $t$ is given by: + +$$ +\mathrm {L B P E} _ {t} (x, y) = \mathbb {E} | p (\mathbf {w} _ {t}, x) - y | _ {2}, \tag {2} +$$ + +where $\mathbf{w}_t$ is the weights at time $t$ , and $p(\mathbf{w}_t, x)$ represents the prediction logits. + +Lemma 1 (Gradient and Importance of Training Samples): The gradient of the loss function $\nabla_{\mathbf{w}_t}\mathcal{L}(p(\mathbf{w}_t,x),y)$ for a dataset $S$ is influenced by the samples with prediction errors. + +Proof of Lemma 1: Consider two datasets $S$ and $S_{\neg j}$ , where $S_{\neg j}$ is obtained by removing the sample $(x_j, y_j)$ from $S$ . Let the gradients of the loss function for these two datasets be $\nabla_{\mathbf{w}_t}^S \mathcal{L}$ and $\nabla_{\mathbf{w}_t}^{S \neg j} \mathcal{L}$ , respectively. The difference between the gradients is given by (see Appendix A.1 for proof): + +$$ +\Delta \nabla_ {\mathbf {w} _ {t}} \mathcal {L} = \frac {- 1}{| S | (| S | - 1)} \sum_ {(x, y) \in S _ {\neg j}} \frac {\partial \mathcal {L} (p \left(\mathbf {w} _ {t} , x\right) , y)}{\partial p \left(\mathbf {w} _ {t} , x\right)} \cdot \frac {\partial p \left(\mathbf {w} _ {t} , x\right)}{\partial \mathbf {w} _ {t}} + \frac {1}{| S |} \frac {\partial \mathcal {L} (p \left(\mathbf {w} _ {t} , x _ {j}\right) , y _ {j})}{\partial p \left(\mathbf {w} _ {t} , x _ {j}\right)} \cdot \frac {\partial p \left(\mathbf {w} _ {t} , x _ {j}\right)}{\partial \mathbf {w} _ {t}} \tag {3} +$$ + +Let us denote the error term as: $e_j = p(\mathbf{w}_t, x_j) - y_j$ , the LBPE score for sample $(x_j, y_j)$ is given by $\mathrm{LBPE}_t(x_j, y_j) = \mathbb{E}|e_j|_2$ , and the difference in gradients related to the sample $(x_j, y_j)$ can be rewritten as: + +$$ +\frac {1}{| S |} \frac {\partial \mathcal {L} \left(e _ {j}\right)}{\partial e _ {j}} \cdot \frac {\partial p \left(\mathbf {w} _ {t} , x _ {j}\right)}{\partial 
\mathbf {w} _ {t}}. \tag {4}
+$$
+
+If the sample $(x_{j}, y_{j})$ has a lower LBPE score, it implies that the error term $e_j$ is smaller. Consider the mean squared error (MSE) loss function, which is convex and defined as $\mathcal{L}(e_j) = \frac{1}{2} e_j^{2}$. Consequently, the derivative of the loss function, $\frac{\partial \mathcal{L}(e_j)}{\partial e_j} = e_j$, is smaller for samples with smaller LBPE scores, leading to a smaller change in the gradient $\Delta \nabla_{\mathbf{w}_t} \mathcal{L}$.
+
+Algorithm 1 Compute LBPE score for samples over epochs
+Require: Training dataset $S$ and its size $|S|$, weights $\mathbf{w}_t$, true labels $y$, model's predicted probabilities $p(\mathbf{w}_t, x)$, number of epochs $E$, epochs with Top-K accuracy
+1: Initialize matrix: LBPE = torch.zeros((E, |S|)) ▷ LBPE scores over samples and epochs
+2: Initialize accuracy: ACC = torch.zeros(E) ▷ Track the accuracy over epochs
+3: for each epoch $t$ in range $E$ do ▷ Loop through each epoch
+4: for each sample index $i$ in $S$ do ▷ Loop through each sample in the dataset
+5: Compute error term for sample $i$ at epoch $t$: $e_{i,t} = p(\mathbf{w}_t, x_i) - y_i$
+6: Compute LBPE score for sample $i$ at epoch $t$: $\mathrm{LBPE}_{i,t} = \mathbb{E}|e_{i,t}|_2$
+7: end for
+8: Compute accuracy at epoch $t$: $\mathrm{ACC}_t$
+9: end for
+10: Top_K ← argsort(ACC)[-K:] ▷ Find the epochs with the Top-K accuracy
+11: AVG_LBPE ← mean(LBPE[Top_K, :]) ▷ Average LBPE score over the Top-K epochs
+12: return AVG_LBPE
+
+Rule 1: For a small dataset, a sample with a lower LBPE score will be more important. Let $S$ be a dataset of size $|S|$, partitioned into subsets $S_{easy}$ (lower LBPE scores) and $S_{hard}$ (higher LBPE scores).
+
+Case 1: Small Dataset - When the dataset size $|S|$ is small, the model's capacity to learn complex representations is limited.
Samples in $S_{easy}$ represent prevalent patterns in the data, and focusing on learning from them leads to a lower average expected loss. This enables the model to effectively capture the dominant patterns within the limited dataset size. Moreover, the gradients of the loss function for samples in $S_{easy}$ are smaller, leading to faster convergence and improved model performance within a limited number of training iterations.
+
+Case 2: Large Dataset - When the dataset size $|S|$ is large, the model has the capacity to learn complex representations, allowing it to generalize well to both easy and hard samples. As the model learns from samples in both $S_{easy}$ and $S_{hard}$, its overall performance improves, and it achieves higher accuracy on hard samples. Training on samples in $S_{hard}$ helps the model learn more discriminative features, as they often lie close to the decision boundary.
+
+Therefore, in the case of a small dataset, samples with lower LBPE scores are more important.
+
+The use of the LBPE importance metric is outlined in Algorithm 1: LBPE scores are averaged over the epochs with the Top-K training accuracy, and the output of the algorithm is this average LBPE score.
+
+# 3.3 Balanced Construction
+
+In this section, we prove that a more balanced class distribution yields a lower expected loss.
+
+Definition 2.1 (Dataset Selection $S_A$ and $S_B$): $S_A$ selects images from each class based on their LBPE scores such that the selection is balanced across classes, while $S_B$ selects images purely based on their LBPE scores without considering class balance.
Formally, we have:
+
+$$
+S_{A} = \{(x_{i}, y_{i}) : x_{i} \in \mathcal{X}_{k} \text{ and } \mathrm{LBPE}_{t}(x_{i}, y_{i}) \leq \tau_{k}\}, \quad S_{B} = \{(x_{i}, y_{i}) : \mathrm{LBPE}_{t}(x_{i}, y_{i}) \leq \tau\}, \tag{5}
+$$
+
+where $\mathcal{X}_k$ denotes the set of images from class $k$, $\tau_k$ is a threshold for class $k$, and $\tau$ is a global threshold. Then $S_A$ is a more balanced dataset compared to $S_B$.
+
+# Algorithm 2 Balanced Dataset Construction
+
+Require: Condensed dataset $S = \{(x_{i}, y_{i})\}_{i=1}^{m}$ with classes $\mathbf{K}$, LBPE scores $\mathrm{LBPE}$, class-specific thresholds $\boldsymbol{\tau} = \{\tau_{k}\}_{k=1}^{K}$ to ensure an equal number of samples for each class
+
+1: Initialize $S_A = \emptyset$ ▷ Initialize the balanced subset as an empty set
+2: for each class $k \in \mathbf{K}$ do
+3: $I_{\text{sel}} \gets \{i : y_i = k \text{ and } \mathrm{LBPE}_t(x_i, y_i) \leq \tau_k\}$ ▷ Find indices of samples for class $k$
+4: $S_A \gets S_A \cup \{(x_i, y_i) : i \in I_{\text{sel}}\}$ ▷ Add the selected samples to the balanced subset
+5: end for
+6: return $S_A$ ▷ Return the balanced subset
+
+Definition 2.2 (Generalization Error): The generalization error of a model is the difference between the expected loss on an unseen test dataset and the expected loss on the training dataset:
+
+$$
+\operatorname{GenErr}(\mathbf{w}) = \mathbb{E}\left[L_{\text{test}}(\mathbf{w})\right] - \mathbb{E}\left[L_{\text{train}}(\mathbf{w})\right]. \tag{6}
+$$
+
+Definition 2.3 (Rademacher Complexity): The Rademacher complexity [31] of a hypothesis class $\mathcal{H}$ for a dataset $S$ of size $N$ is defined as:
+
+$$
+\mathcal{R}_{N}(\mathcal{H}) = \underset{\sigma}{\mathbb{E}}\left[\sup_{h \in \mathcal{H}} \frac{1}{N} \sum_{i=1}^{N} \sigma_{i} h\left(\mathbf{x}_{i}\right)\right], \tag{7}
+$$
+
+where $\sigma_{i}$ are
independent Rademacher random variables taking values in $\{-1, +1\}$ with equal probability.

Lemma 2.1 (Generalization Error Bound): With high probability, the generalization error is upper-bounded by the Rademacher complexity of the hypothesis class:

$$
\operatorname{GenErr}(\mathbf{w}) \leq 2\mathcal{R}_{N}(\mathcal{H}) + \mathcal{O}\left(\frac{1}{\sqrt{N}}\right), \tag{8}
$$

where $\mathcal{O}$ represents the order of the term.

Lemma 2.2 (Rademacher Complexity Comparison): The Rademacher complexity of dataset $S_A$ is less than that of dataset $S_B$:

$$
\mathcal{R}_{N_{A}}(\mathcal{H}) \leq \mathcal{R}_{N_{B}}(\mathcal{H}). \tag{9}
$$

Theorem 2.1: The expected loss for the dataset $S_A$ is less than or equal to that for $S_B$ when both models achieve similar performance on their respective training sets.

Proof of Theorem 2.1: Using Lemma 2.1 and Lemma 2.2, we have:

$$
\operatorname{GenErr}\left(\mathbf{w}_{A}\right) \leq \operatorname{GenErr}\left(\mathbf{w}_{B}\right). \tag{10}
$$

Assuming that both models achieve similar performance on their respective training sets, the training losses are approximately equal:

$$
\mathbb{E}\left[ L_{\text{train}}\left(\mathbf{w}_{A}\right) \right] \approx \mathbb{E}\left[ L_{\text{train}}\left(\mathbf{w}_{B}\right) \right]. \tag{11}
$$

Given this assumption, we can rewrite the generalization error inequality as:

$$
\mathbb{E}\left[ L_{\text{test}}(\mathbf{w}_{A}) \right] - \mathbb{E}\left[ L_{\text{train}}(\mathbf{w}_{A}) \right] \leq \mathbb{E}\left[ L_{\text{test}}(\mathbf{w}_{B}) \right] - \mathbb{E}\left[ L_{\text{train}}(\mathbf{w}_{B}) \right].
\tag{12}
$$

Adding $\mathbb{E}[L_{\text{train}}(\mathbf{w}_A)]$ to both sides and invoking Eq. 11, we get:

$$
\mathbb{E}\left[ L_{\text{test}}\left(\mathbf{w}_{A}\right) \right] \leq \mathbb{E}\left[ L_{\text{test}}\left(\mathbf{w}_{B}\right) \right]. \tag{13}
$$

This result indicates that the balanced dataset $S_A$ achieves a lower expected test loss than $S_B$.

Theorem 2.2: Let $S_F$ and $S_C$ be the full and condensed datasets, respectively, and let both $S_F$ and $S_C$ have an imbalanced class distribution with the same degree of imbalance. Then, the influence of the imbalanced class distribution on the expected loss is larger for the condensed dataset $S_C$ than for the full dataset $S_F$.

Proof of Theorem 2.2: We compare the expected loss for the full and condensed datasets, taking into account their class imbalances. Let $L(h)$ denote the loss function for the hypothesis $h$, and let $\mathbb{E}[L(h) \mid S]$ denote the expected loss of $h$ on the dataset $S$. Let $n_{kF}$ and $n_{kC}$ denote the number of samples in class $k$ for datasets $S_F$ and $S_C$, respectively, and let $m_F$ and $m_C$ denote the total number of samples in $S_F$ and $S_C$. Let $r_k = \frac{n_{kF}}{m_F} = \frac{n_{kC}}{m_C}$ be the class ratio for each class $k$ in both datasets. The expected loss for $S_F$ and $S_C$ can be written as:

$$
\mathbb{E}[L(h) \mid S_{F}] = \sum_{k=1}^{K} r_{k}\, \mathbb{E}[l(h(x), y) \mid \mathcal{X}_{k}], \quad \mathbb{E}[L(h) \mid S_{C}] = \sum_{k=1}^{K} r_{k}\, \mathbb{E}[l(h(x), y) \mid \mathcal{X}_{k}]. \tag{14}
$$

Since the two expected losses in Eq. 14 coincide while $m_C < m_F$, comparing the expected loss per sample in each dataset gives:

$$
\frac{\mathbb{E}[L(h) \mid S_{C}]}{m_{C}} > \frac{\mathbb{E}[L(h) \mid S_{F}]}{m_{F}}. \tag{15}
$$

This implies that the influence of the imbalanced class distribution is larger for $S_{C}$ than for $S_{F}$.
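The balanced construction of Algorithm 2 can be sketched in plain Python. This is a minimal illustration rather than the original implementation: the function and variable names are ours, and the class-specific thresholds $\tau_k$ are realized implicitly by keeping an equal number of the lowest-LBPE (easiest) samples from every class.

```python
from collections import defaultdict

def balanced_construction(lbpe_scores, labels, per_class):
    """Select `per_class` samples from every class, preferring low LBPE
    scores (easy samples, per Rule 1). Returns the selected indices."""
    # group sample indices by class label
    by_class = defaultdict(list)
    for i, y in enumerate(labels):
        by_class[y].append(i)
    selected = []
    for k in sorted(by_class):
        # ascending LBPE: the easiest samples of class k come first,
        # mirroring the threshold test LBPE_t(x_i, y_i) <= tau_k
        ranked = sorted(by_class[k], key=lambda i: lbpe_scores[i])
        selected.extend(ranked[:per_class])
    return sorted(selected)

# toy example: six samples, two classes, keep the two easiest per class
scores = [0.9, 0.1, 0.5, 0.2, 0.8, 0.3]
labels = [0, 0, 0, 1, 1, 1]
print(balanced_construction(scores, labels, per_class=2))  # [1, 2, 3, 5]
```

Because every class contributes the same number of samples, the resulting subset is balanced by construction, which is exactly the property Rule 2 calls for.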
Rule 2: A balanced class distribution should be utilized for the condensed dataset. The construction of a balanced class distribution based on LBPE scores is outlined in Algorithm 2. Its objective is to select an equal number of samples for each class, ensuring a balanced dataset.

# 4 Experiments

# 4.1 Experiment Settings

IPC stands for "Images Per Class". $\mathrm{IPC}_{\mathbf{F}\rightarrow \mathbf{T}}$ means flexibly resizing the condensed dataset from size $\mathbf{F}$ to size $\mathbf{T}$. More detailed settings can be found in Appendix B.1.

Dataset Condensation Settings. The CIFAR-10 and CIFAR-100 datasets [20] are condensed via ConvNet-D3 [12], and ImageNet-10 [5] via ResNet10-AP [13], both following IDC [19]. IPC is 10, 20, or 50, depending on the experiment. For both networks, the learning rate is 0.01 with momentum 0.9 and weight decay 0.0005, using the SGD optimizer and a multi-step learning rate scheduler. The training batch size is 64, and the network is trained for $2000 \times 100$ epochs for CIFAR-10/CIFAR-100 and $500 \times 100$ epochs for ImageNet-10.

YOCO Settings. 1) LBPE score selection. To reduce computational costs, we derive the LBPE score from the training dynamics of the early $E$ epochs. To reduce variance, we use the LBPE scores from the top-$K$ training epochs with the highest accuracy. For CIFAR-10, we set $E = 100$ and $K = 10$ for all $\mathrm{IPC}_{\mathbf{F}}$ and $\mathrm{IPC}_{\mathbf{T}}$. For CIFAR-100 and ImageNet-10, we set $E = 200$ and $K = 10$ for all $\mathrm{IPC}_{\mathbf{F}}$ and $\mathrm{IPC}_{\mathbf{T}}$. 2) Balanced construction. We use $S_{A}$ in Eq. 5 to achieve a balanced construction. Following IDC [19], we leverage a multi-formation framework to increase the synthetic data quantity while preserving the storage budget. Specifically, an IDC-condensed image is composed of $n^2$ patches. Each patch is derived from one original image with the resolution scaled down by a factor of $1/n^2$.
Here, $n$ is referred to as the "factor" in the multi-formation process. For CIFAR-10 and CIFAR-100 datasets, $n = 2$ ; for ImageNet-10 dataset, $n = 3$ . We create balanced classes according to these patches. As a result, all the classes have the same number of samples. 3) Flexible resizing. For datasets with $\mathrm{IPC}_{\mathbf{F}} = 10$ and $\mathrm{IPC}_{\mathbf{F}} = 20$ , we select $\mathrm{IPC}_{\mathbf{T}}$ of 1, 2, and 5. For $\mathrm{IPC}_{\mathbf{F}} = 50$ , we select $\mathrm{IPC}_{\mathbf{T}}$ of 1, 2, 5, and 10. For a condensed dataset with $\mathrm{IPC}_{\mathbf{F}}$ , the performance of its flexible resizing is indicated by the average accuracy across different $\mathrm{IPC}_{\mathbf{T}}$ values. + +Comparison Baselines. We have two sets of baselines for comparison: 1) dataset condensation methods including IDC[19], DREAM[24], MTT [1], DSA [50] and KIP [32] and 2) dataset pruning methods including SSP [40], Entropy [3], AUM [35], Forgetting [41], EL2N [34], and CCS [54]. For dataset condensation methods, we use a random subset as the baseline. For dataset pruning methods, their specific metrics are used to rank and prune datasets to the required size. + +# 4.2 Primary Results + +Tab. 1 provides a comprehensive comparison of different methods for flexibly resizing datasets from an initial $\mathrm{IPC}_{\mathbf{F}}$ to a target $\mathrm{IPC}_{\mathbf{T}}$ . In this table, we have not included ImageNet results on DREAM [24] since it only reports on Tiny ImageNet with a resolution of $64\times 64$ , in contrast to ImageNet's $224\times 224$ . The third column of the table shows the accuracy of the condensed dataset at the parameter $\mathrm{IPC}_{\mathbf{F}}$ . We then flexibly resize the dataset from $\mathrm{IPC}_{\mathbf{F}}$ to $\mathrm{IPC}_{\mathbf{T}}$ . The blue area represents the average accuracy across different $\mathrm{IPC}_{\mathbf{T}}$ values. For instance, consider the CIFAR-10 dataset with $\mathrm{IPC}_{\mathbf{F}} = 10$ . 
Resizing it to $\mathrm{IPC}_{\mathbf{T}} = 1, 2$, and 5 using our method yields accuracies of $42.28\%$, $46.67\%$, and $55.96\%$, respectively. The average accuracy of these three values is $48.30\%$. This value surpasses the $37.08\%$ accuracy of SSP [40] by a considerable margin of $11.22\%$.

Ablation Study. Tab. 2 shows the ablation study of the LBPE score and the balanced construction across dataset condensation methods. The first row shows the baseline results, where neither the LBPE score nor the balanced construction is applied. "Balanced only" (second row) indicates that samples are selected randomly while the class distribution is kept balanced. "LBPE only" (third row) means the

Table 2: Ablation study on two rules. (CIFAR-10: $\mathrm{IPC}_{10\rightarrow 1}$)
| LBPE | Balanced | IDC [19] | DREAM [24] | MTT [1] | KIP [32] |
|---|---|---|---|---|---|
| - | - | 28.23 | 30.87 | 19.75 | 14.06 |
| - | ✓ | 30.19 | 32.83 | 19.09 | 16.27 |
| ✓ | - | 39.38 | 37.30 | 20.37 | 15.78 |
| ✓ | ✓ | 42.28 | 42.29 | 22.02 | 22.24 |
Table 1: IPC means "images per class". Flexibly resize dataset from $\mathrm{IPC}_{\mathbf{F}}$ to $\mathrm{IPC}_{\mathbf{T}}$ ($\mathrm{IPC}_{\mathbf{F} \rightarrow \mathbf{T}}$). The blue areas represent the average accuracy of listed $\mathrm{IPC}_{\mathbf{T}}$ datasets for different values of $\mathbf{T}$. The gray areas indicate the accuracy difference between the corresponding methods and ours.
IDC [19] and DREAM [24] are dataset condensation methods; SSP, Entropy, AUM, Forg., EL2N, and CCS are dataset pruning methods.

| Dataset | $\mathrm{IPC}_{\mathbf{F}}$ | Acc. | $\mathrm{IPC}_{\mathbf{T}}$ | IDC [19] | DREAM [24] | SSP [40] | Entropy [3] | AUM [35] | Forg. [41] | EL2N [34] | CCS [54] | Ours |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| CIFAR-10 | 10 | 67.50 | 1 | 28.23 | 30.87 | 27.83 | 30.30 | 13.30 | 16.68 | 16.95 | 33.54 | 42.28 |
| | | | 2 | 37.10 | 38.88 | 34.95 | 38.88 | 18.44 | 22.13 | 23.26 | 39.20 | 46.67 |
| | | | 5 | 52.92 | 54.23 | 48.47 | 52.85 | 41.40 | 45.49 | 46.58 | 53.23 | 55.96 |
| | | | Avg. | 39.42 | 41.33 | 37.08 | 40.68 | 24.38 | 28.10 | 28.93 | 41.99 | 48.30 |
| | | | Diff. | -8.89 | -6.98 | -11.22 | -7.63 | -23.92 | -20.20 | -19.37 | -6.31 | - |
| | 50 | 74.50 | 1 | 29.45 | 27.61 | 28.99 | 17.95 | 7.21 | 12.23 | 7.95 | 31.28 | 38.77 |
| | | | 2 | 34.27 | 36.11 | 34.51 | 24.46 | 8.67 | 12.17 | 9.47 | 38.71 | 44.54 |
| | | | 5 | 45.85 | 48.28 | 46.38 | 34.12 | 12.85 | 15.55 | 16.03 | 48.19 | 53.04 |
| | | | 10 | 57.71 | 59.11 | 56.81 | 47.61 | 22.92 | 27.01 | 31.33 | 56.80 | 61.10 |
| | | | Avg. | 41.82 | 42.78 | 41.67 | 31.04 | 12.91 | 16.74 | 16.20 | 43.75 | 49.36 |
| | | | Diff. | -7.54 | -6.58 | -7.69 | -18.33 | -36.45 | -32.62 | -33.17 | -5.62 | - |
| CIFAR-100 | 10 | 45.40 | 1 | 14.78 | 15.05 | 14.94 | 11.28 | 3.64 | 6.45 | 5.12 | 18.97 | 22.57 |
| | | | 2 | 22.49 | 21.78 | 20.65 | 16.78 | 5.93 | 10.03 | 8.15 | 25.27 | 29.09 |
| | | | 5 | 34.90 | 35.54 | 30.48 | 29.96 | 17.32 | 21.45 | 22.40 | 36.01 | 38.51 |
| | | | Avg. | 24.06 | 24.12 | 22.02 | 19.34 | 8.96 | 12.64 | 11.89 | 26.75 | 30.06 |
| | | | Diff. | -6.00 | -5.93 | -8.03 | -10.72 | -21.09 | -17.41 | -18.17 | -3.31 | - |
| | 20 | 49.50 | 1 | 13.92 | 13.26 | 14.65 | 5.75 | 2.96 | 7.59 | 4.59 | 18.72 | 23.74 |
| | | | 2 | 20.62 | 20.41 | 20.27 | 8.63 | 3.96 | 10.64 | 6.18 | 24.08 | 29.93 |
| | | | 5 | 31.21 | 31.81 | 30.34 | 17.51 | 8.25 | 17.63 | 11.76 | 32.81 | 38.02 |
| | | | Avg. | 21.92 | 21.83 | 21.75 | 10.63 | 5.06 | 11.95 | 7.51 | 25.20 | 30.56 |
| | | | Diff. | -8.65 | -8.74 | -8.81 | -19.93 | -25.51 | -18.61 | -23.05 | -5.36 | - |
| | 50 | 52.60 | 1 | 13.41 | 13.36 | 15.90 | 1.86 | 2.79 | 9.03 | 4.21 | 19.05 | 23.47 |
| | | | 2 | 20.38 | 19.97 | 21.26 | 2.86 | 3.04 | 12.66 | 5.01 | 24.32 | 29.59 |
| | | | 5 | 29.92 | 29.88 | 29.63 | 6.04 | 4.56 | 20.23 | 7.24 | 31.93 | 37.52 |
| | | | 10 | 37.79 | 37.85 | 36.97 | 13.31 | 8.56 | 29.11 | 11.72 | 38.05 | 42.79 |
| | | | Avg. | 25.38 | 25.27 | 25.94 | 6.02 | 4.74 | 17.76 | 7.05 | 28.34 | 33.34 |
| | | | Diff. | -7.97 | -8.08 | -7.40 | -27.33 | -28.61 | -15.59 | -26.30 | -5.01 | - |
| ImageNet-10 | 10 | 72.80 | 1 | 44.93 | - | 45.69 | 40.98 | 17.84 | 32.07 | 41.00 | 44.27 | 53.91 |
| | | | 2 | 57.84 | - | 58.47 | 52.04 | 29.13 | 44.89 | 54.47 | 56.53 | 59.69 |
| | | | 5 | 67.20 | - | 63.11 | 64.60 | 44.56 | 55.13 | 65.87 | 67.36 | 64.47 |
| | | | Avg. | 56.66 | - | 55.76 | 52.54 | 30.51 | 44.03 | 53.78 | 56.05 | 59.36 |
| | | | Diff. | 2.70 | - | 3.60 | 6.82 | 28.85 | 15.33 | 5.58 | 3.31 | - |
| | 20 | 76.60 | 1 | 42.00 | - | 43.13 | 36.13 | 14.51 | 24.98 | 24.09 | 34.64 | 53.07 |
| | | | 2 | 53.93 | - | 54.82 | 46.91 | 19.09 | 31.27 | 33.16 | 42.22 | 58.96 |
| | | | 5 | 59.56 | - | 61.27 | 56.44 | 27.78 | 36.44 | 46.02 | 57.11 | 64.38 |
| | | | Avg. | 51.83 | - | 53.07 | 46.49 | 20.46 | 30.90 | 34.42 | 44.66 | 58.80 |
| | | | Diff. | 6.97 | - | 5.73 | 12.31 | 38.34 | 27.90 | 24.38 | 14.14 | - |
+ +LBPE score is used for ranking samples, and the selection results are purely based on the LBPE score without considering the class balance. "LBPE + Balanced" indicates that both elements are included for sample selection. The empirical findings conclusively affirm the effectiveness of the two rules, which constitute the principal contributions of our YOCO method. + +Standard Deviation of Experiments. Different training dynamics and network initializations impact the final results. Therefore, the reported results are averaged over three different training dynamics, and each training dynamic is evaluated based on three different network initializations. See Appendix B.2 for the primary results table with standard deviation. + +# 4.3 Analysis of Two Rules + +# 4.3.1 Analysis of LBPE Score for Sample Ranking + +Tab. 3 illustrates the robust performance of our YOCO method across diverse network structures, including ConvNet, ResNet, and DenseNet, demonstrating its strong generalization ability. Additionally, we present different sample ranking metrics from dataset pruning methods on these networks, demonstrating that our method outperforms both random selection and other data pruning methods. + +In Tab. 4, we experiment with prioritizing easy samples over hard ones. We achieve this by reversing the importance metrics introduced by AUM [35], Forg. [41], and EL2N [34] that originally prioritize + +Table 3: Accuracies on different network structures and different sample ranking metrics. (IDC [19] condensed CIFAR-10: $\mathrm{IPC}_{10\rightarrow 1}$ ) + +
| Method | ConvNet [12] | ResNet [13] | DenseNet [14] |
|---|---|---|---|
| Random | 28.23 | 24.14 | 24.63 |
| SSP [40] | 27.83 | 24.64 | 24.75 |
| Entropy [3] | 30.30 | 30.53 | 29.93 |
| AUM [35] | 13.30 | 15.04 | 14.56 |
| Forg. [41] | 16.68 | 16.75 | 17.43 |
| EL2N [34] | 16.95 | 19.98 | 21.43 |
| Ours | 42.28 | 34.53 | 34.29 |
+ +Table 4: Prioritizing easy samples is better for different dataset pruning and dataset condensation methods. "R?" represents whether to reverse the metrics which prioritize hard samples. (CIFAR-10: $\mathrm{IPC}_{10\rightarrow 1}$ ) + +
| Method | R? | IDC [19] | DREAM [24] | MTT [1] | DSA [50] |
|---|---|---|---|---|---|
| AUM [35] | - | 13.30 | 14.43 | 15.33 | 14.25 |
| AUM [35] | ✓ | 37.97 | 38.18 | 16.63 | 18.23 |
| Forg. [41] | - | 16.68 | 16.26 | 18.82 | 16.55 |
| Forg. [41] | ✓ | 36.69 | 36.15 | 16.65 | 17.03 |
| EL2N [34] | - | 16.95 | 18.13 | 16.98 | 13.14 |
| EL2N [34] | ✓ | 33.11 | 34.36 | 19.01 | 21.29 |
| Ours | - | 42.28 | 42.29 | 22.02 | 22.40 |
+ +Table 5: Balanced construction works on different dataset pruning methods. “ $\mathcal{B}$ ?” represents whether to use balanced construction. The subscript $^+\mathrm{value}$ indicates the accuracy gain from balanced construction. (IDC [19] condensed CIFAR-10: $\mathrm{IPC}_{10\rightarrow T}$ ) + +
| $\mathrm{IPC}_{\mathbf{T}}$ | $\mathcal{B}$? | Random | SSP [40] | Entropy [3] | AUM [35] | Forg. [41] | EL2N [34] | Ours |
|---|---|---|---|---|---|---|---|---|
| $\mathrm{IPC}_1$ | - | 28.23 | 27.83 | 30.30 | 13.30 | 16.68 | 16.95 | 37.63 |
| | ✓ | 30.05 (+1.82) | 33.21 (+5.38) | 33.67 (+3.37) | 15.64 (+2.34) | 19.09 (+2.41) | 18.43 (+1.48) | 42.28 (+4.65) |
| $\mathrm{IPC}_2$ | - | 37.10 | 34.95 | 38.88 | 18.44 | 22.13 | 23.26 | 42.99 |
| | ✓ | 39.44 (+2.34) | 40.57 (+5.62) | 42.17 (+3.29) | 23.84 (+5.40) | 28.06 (+5.93) | 26.54 (+3.28) | 46.67 (+3.68) |
| $\mathrm{IPC}_5$ | - | 52.92 | 48.47 | 52.85 | 41.40 | 45.49 | 46.58 | 53.86 |
| | ✓ | 52.64 (-0.28) | 49.44 (+0.97) | 54.73 (+1.88) | 47.23 (+5.83) | 48.02 (+2.53) | 48.86 (+2.28) | 55.96 (+2.10) |
+ +hard samples. Our results indicate that across various condensed datasets, including IDC [19], DREAM [24], MTT [1], and DSA [50], there is a distinct advantage in prioritizing easier samples over harder ones. These findings lend support to our Rule 1. + +# 4.3.2 Analysis of Balanced Construction + +Fig. 2 presents the class distributions with and without a balanced construction for different datasets and different $\mathrm{IPC}_{\mathbf{F}\rightarrow \mathbf{T}}$ . As explained in YOCO settings, our balanced construction is based on the multi-formation framework from IDC [19]. Therefore, the x-axis represents the count of images after multi-formation instead of the condensed images. It is evident that a ranking strategy relying solely on the LBPE score can result in a significant class imbalance, particularly severe in the ImageNet dataset. As depicted in Fig. 2(f), three classes have no image patches left. Our balanced construction method effectively mitigates this issue. Notably, in the case of ImageNet- $10_{10\to 1}$ , the balanced construction boosts the accuracy by an impressive $19.37\%$ . + +To better understand the impact of balanced class distribution on various dataset pruning methods, we conducted a comparative analysis, as presented in Tab. 5. Clearly, achieving a balanced class distribution significantly enhances the performance of all examined methods. Remarkably, our proposed method consistently outperforms others under both imbalanced and balanced class scenarios, further substantiating the efficacy of our approach. + +# 4.4 Other Analysis + +Sample Importance Rules Differ between Condensed Dataset and Full Dataset. In Fig. 3, we compare Sample Importance Rules for Condensed Datasets $\left(\mathrm{IPC}_{10}, \mathrm{IPC}_{50}\right)$ and the Full Dataset $\left(\mathrm{IPC}_{5000}\right)$ , by adjusting the pruning ratio from $10\%$ to $90\%$ . 
Unmarked solid lines mean prioritizing easy samples, dashed lines mean prioritizing hard samples, and marked solid lines depict the difference between the two preceding accuracies. Therefore, the grey region above zero indicates "Prefer easy samples" (Rule\_easy), while the blue region below zero represents "Prefer hard samples" (Rule\_hard). We have two observations. First, as the pruning ratio increases, there is a gradual transition from Rule\_hard to Rule\_easy. Second, the turning point of this transition depends on the dataset size. Specifically, the turning points for $\mathrm{IPC}_{10}$, $\mathrm{IPC}_{50}$, and $\mathrm{IPC}_{5000}$ occur at pruning

![](images/6a063a2d4c08d6e583d686e2512a1b87f81303571f9ca05037452bca6d00c16a.jpg)

![](images/563343b152475ca37578f151c89481a093377c2871051007a93d70a73ae23049.jpg)

![](images/52d4c3c370bc5f718c892b9870459f7d6c9b56ec535a243e5c1e3f10d360b29c.jpg)

![](images/534bbc2e6d4b86bbd42134c5160a04e8398e12ce0321918637accdea345e212d.jpg)
(d) CIFAR-$10_{50\to 1}$

![](images/bf539d00b301528a65c0521335011ea2163a52917aa5e0123a65f3240f4d24e6.jpg)
(e) CIFAR-$100_{10\to 1}$

![](images/ac9592403d704661f2373ef886086713a0f7fa9986a6041bbc8ae546ef54dcc3.jpg)
(f) ImageNet-$10_{10\to 1}$

![](images/76d60d0958396bb129caac68dc83a9930b10c157822d5e15bdd7ef4ba4211876.jpg)
Figure 2: Balanced and imbalanced selection by ranking samples with LBPE score. $\mathrm{Dataset}_{\mathbf{F} \rightarrow \mathbf{T}}$ denotes resizing the dataset from $\mathrm{IPC}_{\mathbf{F}}$ to $\mathrm{IPC}_{\mathbf{T}}$. Accuracies for each setting are also listed in the legend. (IDC [19] condensed datasets)
Figure 3: Different sample importance rules between condensed datasets and full datasets.

ratios of $24\%$, $38\%$, and $72\%$, respectively. These experimental outcomes substantiate our Rule 1 that condensed datasets should adhere to Rule\_easy.

Performance Gap from Multi-formation.
We would like to explain the large performance gap between multi-formation-based methods (IDC [19] and DREAM [24]) and other methods (MTT [1], KIP [33], and DSA [50]) in Tab. 2 and Tab. 4. The potential reason is that a single image can be decoded into $2^{2} = 4$ low-resolution images via multi-formation. As a result, methods employing multi-formation generate four times as many images as those that do not. The illustration is shown in Fig. 4.

![](images/34884b31635fc1ae6de40875fb655ba0cbf24c86cd4fcd955dd9a44043d28803.jpg)
Figure 4: Illustration of the multi-formation with a factor of 2. (Taken from IDC [19])

Why Use the LBPE Score from the Top-$K$ Training Epochs with the Highest Accuracy? As shown in Eq. 2, different training epochs $t$ lead to different LBPE scores. Fig. 5 illustrates the accuracy of the dataset selected via the LBPE score at specific training epochs. We select LBPE scores from the initial 100 epochs out of the 1000 original epochs to reduce computational costs. We have two observations. First, the model's accuracy during the first few epochs is substantially low. LBPE scores derived from these early-stage epochs might not accurately represent the samples' true importance since the model is insufficiently trained. Second, there is significant variance in accuracy even after 40 epochs, leading to potential instability in the LBPE score selection. To address this, we average LBPE scores from the epochs with top-$K$ accuracy, thereby reducing variability and ensuring a more reliable representation of sample importance.

Why Might the LBPE Score Perform Better at Certain Epochs? In Fig. 7, we present the distribution of LBPE scores at various training epochs, with scores arranged in ascending order for each class to facilitate comparison across epochs. Our experiments find that the LBPE scores decrease as the epoch number increases. The superior accuracy of $\mathrm{LBPE}_{90}$ is due to two reasons.
First, the model at the 90th epoch is more thoroughly trained than the model at the first epoch, leading to more accurate LBPE scores. Second, the $\mathrm{LBPE}_{90}$ score offers a more uniform distribution and a wider range $[0, 1]$, enhancing sample distinction. In contrast, the $\mathrm{LBPE}_{1000}$ score is mostly concentrated within a narrow range $[0, 0.1]$ for the majority of classes, limiting differentiation among samples. Further possible reasons will be explored in future studies.

Visualization. Fig. 6 visualizes the easy and hard samples identified by our YOCO method. We notice that most easy samples have a distinct demarcation between the object and its background. This is particularly evident in the classes of "vacuum cleaner" and "cocktail shaker". The easy samples in

![](images/7939d08127815522f6d7ee516d6796a964dafbbd01f40e836bdc3d3578d3fc35.jpg)
Figure 5: Accuracy of the dataset selected with LBPE score at specific epochs.

![](images/04a170c3eada3b614b75290672784421ff85af4d18ecd64c8256cf26dcf9726e.jpg)
Figure 6: Visualization of hard and easy samples of the ImageNet dataset selected by our method. Both the original and condensed images are shown for comparison.

![](images/c94b1719fda761af67bfa8a637c790c8169fa2c3c326d98147c4b5ec91ee0a26.jpg)
Figure 7: LBPE scores at different epochs ($\mathrm{LBPE}_{\text{epoch}}$) for ten classes of the CIFAR-10 dataset.
First, YOCO employs the Logit-Based Prediction Error (LBPE) score to rank the importance of training samples and emphasizes the benefit of prioritizing easy samples with low LBPE scores. Second, YOCO underscores the need to address the class imbalance in condensed datasets and utilizes Balanced Construction to solve the problem. Our experiments validated YOCO's effectiveness across different networks and datasets. These insights offer valuable directions for future dataset condensation and dataset pruning research. + +We acknowledge several limitations and potential areas for further investigation. First, although our method uses early training epochs to reduce computational costs, determining the sample importance in the first few training epochs or even before training is interesting for future work. Second, we only utilize the LBPE score to establish the importance of samples within the dataset. However, relying on a single metric might not be the optimal approach. There are other importance metrics, such as SSP [40] and AUM [35], that could be beneficial to integrate into our methodology. Third, as our current work only covers clean datasets like CIFAR-10, the performance of our method on noisy datasets requires further investigation. + +The border impact is shown in Appendix C. + +# 6 Acknowledgement + +This work is partially supported by Joey Tianyi Zhou's A*STAR SERC Central Research Fund (Use-inspired Basic Research), the Singapore Government's Research, Innovation and enterprise 2020 Plan (Advanced Manufacturing and Engineering domain) under Grant A18A1b0045, and A*STAR CFAR Internship Award for Research Excellence (CIARE). The computational work for this article was partially performed on resources of the National Supercomputing Centre (NSCC), Singapore (https://www.nscc.sg). + +# References + +[1] G. Cazenavette, T. Wang, A. Torralba, A. A. Efros, and J.-Y. Zhu. Dataset distillation by matching training trajectories. In Proc. IEEE Conf. Comput. Vis. 
Pattern Recog., 2022. +[2] K. Chitta, J. M. Álvarez, E. Haussmann, and C. Farabet. Training data subset search with ensemble active learning. IEEE Transactions on Intelligent Transportation Systems, 23(9):14741-14752, 2021. +[3] C. Coleman, C. Yeh, S. Mussmann, B. Mirzasoleiman, P. Bailis, P. Liang, J. Leskovec, and M. Zaharia. Selection via proxy: Efficient data selection for deep learning. In Proc. Int. Conf. Learn. Represent., 2020. +[4] J. Cui, R. Wang, S. Si, and C.-J. Hsieh. Scaling up dataset distillation to imagenet-1k with constant memory. arXiv preprint arXiv:2211.10586, 2022. +[5] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei. Imagenet: A large-scale hierarchical image database. In Proc. IEEE Conf. Comput. Vis. Pattern Recog., pages 248–255, 2009. +[6] Z. Deng and O. Russakovsky. Remember the past: Distilling datasets into addressable memories for neural networks. In Proc. Adv. Neural Inform. Process. Syst., 2022. +[7] J. Du, Y. Jiang, V. T. Tan, J. T. Zhou, and H. Li. Minimizing the accumulated trajectory error to improve dataset distillation. In Proc. IEEE Conf. Comput. Vis. Pattern Recog., 2023. +[8] M. Ducoffe and F. Precioso. Adversarial active learning for deep networks: a margin based approach. arXiv preprint arXiv:1802.09841, 2018. +[9] D. Feldman and M. Langberg. A unified framework for approximating and clustering data. In Proceedings of the Forty-Third Annual ACM Symposium on Theory of Computing, page 569-578, 2011. +[10] D. Feldman, M. Schmidt, and C. Sohler. Turning big data into tiny data: Constant-size coresets for k-means, pca, and projective clustering. SIAM Journal on Computing, 49(3):601-657, 2020. +[11] V. Feldman and C. Zhang. What neural networks memorize and why: Discovering the long tail via influence estimation. In Proc. Adv. Neural Inform. Process. Syst., pages 2881-2891, 2020. +[12] S. Gidaris and N. Komodakis. Dynamic few-shot visual learning without forgetting. In Proc. IEEE Conf. Comput. Vis. 
Pattern Recog., pages 4367-4375, 2018. +[13] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In Proc. IEEE Conf. Comput. Vis. Pattern Recog., pages 770-778, 2016. +[14] G. Huang, Z. Liu, L. Van Der Maaten, and K. Q. Weinberger. Densely connected convolutional networks. In Proc. IEEE Conf. Comput. Vis. Pattern Recog., pages 4700-4708, 2017. +[15] L. Huang and N. K. Vishnoi. Coresets for clustering in euclidean spaces: Importance sampling is nearly optimal. In Proceedings of the 52nd Annual ACM SIGACT Symposium on Theory of Computing, page 1416-1429, 2020. +[16] Z. Jiang, J. Gu, M. Liu, and D. Z. Pan. Delving into effective gradient matching for dataset condensation. arXiv preprint arXiv:2208.00311, 2022. + +[17] K. Killamsetty, S. Durga, G. Ramakrishnan, A. De, and R. Iyer. Grad-match: Gradient matching based data subset selection for efficient deep model training. In Proc. Int. Conf. Mach. Learn., pages 5464-5474, 2021. +[18] K. Killamsetty, D. Sivasubramanian, G. Ramakrishnan, and R. Iyer. Glister: Generalization based data subset selection for efficient and robust learning. In Proc. AAAI Conf. Artif. Intell., pages 8110-8118, 2021. +[19] J.-H. Kim, J. Kim, S. J. Oh, S. Yun, H. Song, J. Jeong, J.-W. Ha, and H. O. Song. Dataset condensation via efficient synthetic-data parameterization. In Proc. Int. Conf. Mach. Learn., 2022. +[20] A. Krizhevsky, G. Hinton, et al. Learning multiple layers of features from tiny images. Technical report, CiteSeer, 2009. +[21] S. Lee, S. Chun, S. Jung, S. Yun, and S. Yoon. Dataset condensation with contrastive signals. In Proc. Int. Conf. Mach. Learn., pages 12352-12364, 2022. +[22] D. D. Lewis. A sequential algorithm for training text classifiers: Corrigendum and additional data. In Acm Sigir Forum, pages 13-19, 1995. +[23] S. Liu, K. Wang, X. Yang, J. Ye, and X. Wang. Dataset distillation via factorization. In Proc. Adv. Neural Inform. Process. Syst., 2022. +[24] Y. Liu, J. Gu, K. Wang, Z. Zhu, W. 
Jiang, and Y. You. DREAM: Efficient dataset distillation by representative matching. arXiv preprint arXiv:2302.14416, 2023. +[25] N. Loo, R. Hasani, A. Amini, and D. Rus. Efficient dataset distillation using random feature approximation. In Proc. Adv. Neural Inform. Process. Syst., 2022. +[26] N. Loo, R. Hasani, M. Lechner, and D. Rus. Dataset distillation with convexified implicit gradients. arXiv preprint arXiv:2302.06755, 2023. +[27] J. Lorraine, P. Vicol, and D. Duvenaud. Optimizing millions of hyperparameters by implicit differentiation. In International Conference on Artificial Intelligence and Statistics, pages 1540-1552, 2020. +[28] K. Margatina, G. Vernikos, L. Barrault, and N. Aletras. Active learning by acquiring contrastive examples. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 650–663, 2021. +[29] K. Meding, L. M. S. Buschoff, R. Geirhos, and F. A. Wichmann. Trivial or impossible — dichotomous data difficulty masks model differences (on imagenet and beyond). In Proc. Int. Conf. Learn. Represent., 2022. +[30] B. Mirzasoleiman, J. Bilmes, and J. Leskovec. Coresets for data-efficient training of machine learning models. In Proc. Int. Conf. Mach. Learn., pages 6950-6960, 2020. +[31] M. Mohri and A. Rostamizadeh. Rademacher complexity bounds for non-iid processes. In Proc. Adv. Neural Inform. Process. Syst., 2008. +[32] T. Nguyen, Z. Chen, and J. Lee. Dataset meta-learning from kernel ridge-regression. In Proc. Int. Conf. Learn. Represent., 2021. +[33] T. Nguyen, R. Novak, L. Xiao, and J. Lee. Dataset distillation with infinitely wide convolutional networks. In Proc. Adv. Neural Inform. Process. Syst., pages 5186-5198, 2021. +[34] M. Paul, S. Ganguli, and G. K. Dziugaite. Deep learning on a data diet: Finding important examples early in training. In Proc. Adv. Neural Inform. Process. Syst., pages 20596-20607, 2021. +[35] G. Pleiss, T. Zhang, E. Elenberg, and K. Q. Weinberger. 
Identifying mislabeled data using the area under the margin ranking. In Proc. Adv. Neural Inform. Process. Syst., pages 17044-17056, 2020. + +[36] O. Pooladzandi, D. Davini, and B. Mirzasoleiman. Adaptive second order coresets for data-efficient machine learning. In Proc. Int. Conf. Mach. Learn., pages 17848-17869, 2022. +[37] P. Ren, Y. Xiao, X. Chang, P.-Y. Huang, Z. Li, B. B. Gupta, X. Chen, and X. Wang. A survey of deep active learning. ACM Comput. Surv., 54(9), oct 2021. +[38] B. Settles. Active Learning. Springer International Publishing, 2012. +[39] S. Shin, H. Bae, D. Shin, W. Joo, and I.-C. Moon. Loss-curvature matching for dataset selection and condensation. In International Conference on Artificial Intelligence and Statistics, pages 8606-8628, 2023. +[40] B. Sorscher, R. Geirhos, S. Shekhar, S. Ganguli, and A. Morcos. Beyond neural scaling laws: beating power law scaling via data pruning. In Proc. Adv. Neural Inform. Process. Syst., pages 19523-19536, 2022. +[41] M. Toneva, A. Sordoni, R. T. des Combes, A. Trischler, Y. Bengio, and G. J. Gordon. An empirical study of example forgetting during deep neural network learning. In Proc. Int. Conf. Learn. Represent., 2019. +[42] P. Vicol, J. P. Lorraine, F. Pedregosa, D. Duvenaud, and R. B. Grosse. On implicit bias in overparameterized bilevel optimization. In Proc. Int. Conf. Mach. Learn., pages 22234-22259, 2022. +[43] K. Wang, J. Gu, D. Zhou, Z. Zhu, W. Jiang, and Y. You. Dim: Distilling dataset into generative model. arXiv preprint arXiv:2303.04707, 2023. +[44] K. Wang, B. Zhao, X. Peng, Z. Zhu, S. Yang, S. Wang, G. Huang, H. Bilen, X. Wang, and Y. You. Cafe: Learning to condense dataset by aligning features. In Proc. IEEE Conf. Comput. Vis. Pattern Recog., pages 12196-12205, 2022. +[45] T. Wang, J.-Y. Zhu, A. Torralba, and A. A. Efros. Dataset distillation. arXiv preprint arXiv:1811.10959, 2018. +[46] M. Welling. Herding dynamical weights to learn. In Proc. Int. Conf. Mach. Learn., pages 1121-1128, 2009. 
+[47] X. Xia, J. Liu, J. Yu, X. Shen, B. Han, and T. Liu. Moderate coreset: A universal method of data selection for real-world data-efficient deep learning. In Proc. Int. Conf. Learn. Represent., 2023. +[48] S. Yang, Z. Xie, H. Peng, M. Xu, M. Sun, and P. Li. Dataset pruning: Reducing training data by examining generalization influence. In Proc. Int. Conf. Learn. Represent., 2023. +[49] L. Zhang, J. Zhang, B. Lei, S. Mukherjee, X. Pan, B. Zhao, C. Ding, Y. Li, and D. Xu. Accelerating dataset distillation via model augmentation. In Proc. IEEE Conf. Comput. Vis. Pattern Recog., 2023. +[50] B. Zhao and H. Bilen. Dataset condensation with differentiable siamese augmentation. In Proc. Int. Conf. Mach. Learn., pages 12674-12685, 2021. +[51] B. Zhao and H. Bilen. Synthesizing informative training samples with GAN. In NeurIPS 2022 Workshop on Synthetic Data for Empowering ML Research, 2022. +[52] B. Zhao and H. Bilen. Dataset condensation with distribution matching. In Proc. IEEE Winter Conf. Appl. Comput. Vis., pages 6514-6523, 2023. +[53] B. Zhao, K. R. Mopuri, and H. Bilen. Dataset condensation with gradient matching. In Proc. Int. Conf. Learn. Represent., 2021. +[54] H. Zheng, R. Liu, F. Lai, and A. Prakash. Coverage-centric coreset selection for high pruning rates. In Proc. Int. Conf. Learn. Represent., 2023. +[55] Y. Zhou, E. Nezhadarya, and J. Ba. Dataset distillation using neural feature regression. In Proc. Adv. Neural Inform. Process. Syst., 2022. 
\ No newline at end of file diff --git a/youonlycondenseoncetworulesforpruningcondenseddatasets/images.zip b/youonlycondenseoncetworulesforpruningcondenseddatasets/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..d5c1133df76ea18bda03c04ae98ec7da96462640 --- /dev/null +++ b/youonlycondenseoncetworulesforpruningcondenseddatasets/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:430fb3b87f7f581be26686807234ad39a461f1518be818181af25e588ef8497f +size 668109 diff --git a/youonlycondenseoncetworulesforpruningcondenseddatasets/layout.json b/youonlycondenseoncetworulesforpruningcondenseddatasets/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..d17cd7a0427fe2db52ac30cfcd44ff4a224988d7 --- /dev/null +++ b/youonlycondenseoncetworulesforpruningcondenseddatasets/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0d2357b04123bf641bf8c692947791b276f9a5ed94dce4a39d5f3e03ba3b91c9 +size 533760 diff --git a/yourrepresentationsareinthenetworkcomposableandparalleladaptationforlargescalemodels/e5ad109c-cd83-439a-9c26-a0e687ebd28a_content_list.json b/yourrepresentationsareinthenetworkcomposableandparalleladaptationforlargescalemodels/e5ad109c-cd83-439a-9c26-a0e687ebd28a_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..296cd9015a2d4488544817a5195624ad932e5c36 --- /dev/null +++ b/yourrepresentationsareinthenetworkcomposableandparalleladaptationforlargescalemodels/e5ad109c-cd83-439a-9c26-a0e687ebd28a_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:20e7dbfb6384309701b42562485e65adc383ad4a6a909d64c95bd7b0a7065ae7 +size 174886 diff --git a/yourrepresentationsareinthenetworkcomposableandparalleladaptationforlargescalemodels/e5ad109c-cd83-439a-9c26-a0e687ebd28a_model.json 
b/yourrepresentationsareinthenetworkcomposableandparalleladaptationforlargescalemodels/e5ad109c-cd83-439a-9c26-a0e687ebd28a_model.json new file mode 100644 index 0000000000000000000000000000000000000000..a2d4e227bd632d97ec6a9a0bb1d86d636264f954 --- /dev/null +++ b/yourrepresentationsareinthenetworkcomposableandparalleladaptationforlargescalemodels/e5ad109c-cd83-439a-9c26-a0e687ebd28a_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:18faaa2fa0397c1531f100b6a7224d393b143d79a60fb8da18b6814c0189c657 +size 207448 diff --git a/yourrepresentationsareinthenetworkcomposableandparalleladaptationforlargescalemodels/e5ad109c-cd83-439a-9c26-a0e687ebd28a_origin.pdf b/yourrepresentationsareinthenetworkcomposableandparalleladaptationforlargescalemodels/e5ad109c-cd83-439a-9c26-a0e687ebd28a_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..eaa7793e4a25f3c41728fda48cba1da120470e2a --- /dev/null +++ b/yourrepresentationsareinthenetworkcomposableandparalleladaptationforlargescalemodels/e5ad109c-cd83-439a-9c26-a0e687ebd28a_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:15b280395b3ae70df872b2b6328270a395d5ff6323b1da6d46c2af3ed922f82f +size 1451431 diff --git a/yourrepresentationsareinthenetworkcomposableandparalleladaptationforlargescalemodels/full.md b/yourrepresentationsareinthenetworkcomposableandparalleladaptationforlargescalemodels/full.md new file mode 100644 index 0000000000000000000000000000000000000000..b6db97b38759d9106bb9d2a67633d493b8f2b1a8 --- /dev/null +++ b/yourrepresentationsareinthenetworkcomposableandparalleladaptationforlargescalemodels/full.md @@ -0,0 +1,705 @@ +# Your representations are in the network: composable and parallel adaptation for large scale models + +Yonatan Dukler Alessandro Achille Hao Yang Benjamin Bowman* Varsha Vivek + +Luca Zancato Avinash Ravichandran Charless Fowlkes Ashwin Swaminathan + +# Stefano Soatto + +# AWS AI Labs + +{dukler, aachille, 
haoyng, bowmaben, varshviv, zancato, ravinash, fowlkec, swashwin, soattos}@amazon.com + +# Abstract + +We present a framework for transfer learning that efficiently adapts a large base-model by learning lightweight cross-attention modules attached to its intermediate activations. We name our approach InCA (Introspective-Cross-Attention) and show that it can efficiently survey a network's representations and identify strong performing adapter models for a downstream task. During training, InCA enables training numerous adapters efficiently and in parallel, isolated from the frozen base model. On the ViT-L/16 architecture, our experiments show that a single adapter, $1.3\%$ of the full model, is able to reach full fine-tuning accuracy on average across 11 challenging downstream classification tasks. Compared with other forms of parameter-efficient adaptation, the isolated nature of the InCA adaptation is computationally desirable for large-scale models. For instance, we adapt ViT-G/14 (1.8B+ parameters) quickly with $20+$ adapters in parallel on a single V100 GPU (76% GPU memory reduction) and exhaustively identify its most useful representations. We further demonstrate how the adapters learned by InCA can be incrementally modified or combined for flexible learning scenarios and our approach achieves state of the art performance on the ImageNet-to-Sketch multi-task benchmark. + +# 1 Introduction + +Foundation models promise to achieve top performance with minimal adaptation on any downstream task. In the realm of language, the data and the hypothesis spaces are shared, and many tasks can be unified into a homogeneous representation. Visual inference domains, on the other hand, can be highly heterogeneous and possibly antagonistic. 
For instance, the hypothesis space for pose estimation is geometric, whereas for scene classification it is semantic, and even domains that appear homogeneous, such as image classification into the 10 CIFAR classes, can trigger interference in the trained model in the presence of modest perturbations of the image statistics. + +Antagonistic domains may interfere within the activations of the later layers, which cannot be simultaneously minimal and sufficient for all domains. However, information about a dissimilar domain may be present in earlier layers, and certainly in the input data, which is trivially sufficient + +![](images/ba73a0e4a19671fd2b4cc0ac89fbdcc1c0cf6c6a7e18b4e1ced0df633f3c7b7c.jpg) +Figure 1: (Left) Top-1 Test Error for fine-grained classification transfer learning tasks evaluated with the ViT-L/16 architecture. InCA performs comparably to full fine-tuning on each challenging dataset. (Right) Max GPU memory usage during training for different adaptation approaches and model sizes. + +![](images/14c859eff22fba51f30b3466a5677d48a7e796868b2035ead41fc826ad35a901.jpg) + +![](images/e4b582eb7358744c16811613cf47d8c32c406780e6235813c39ad3de19ff8fe4.jpg) +Figure 2: InCA Adaptation In (B), intermediate activation maps are extracted from a pretrained backbone during the forward pass. Each activation map is passed to a lightweight InCA adapter (shown in (C)) or Open-InCA adapter (shown in (D)) depending on the task settings. In (A), we illustrate how multiple adapters are trained in parallel and independently, and during inference can be combined or used in parallel. In (C), (D) we present a schema of the InCA and Open-InCA adapters; see Sec. 3 for details. + +for any task. Indeed, as opposed to just operating with the final model's representations, the typical approach of addressing domain variability in transfer learning is by applying full fine-tuning of the model on new data.
By optimizing all of the model's parameters, each of the model representations can be potentially harnessed as a starting point for the adaptation. + +While accurate and robust to a variety of domains, full fine-tuning of large-scale models entails sweeping computational and storage costs. Further, the resultant model can only function for its dedicated task, not allowing for shared computation and parallel execution of potential future tasks. To tackle the problem of efficient, versatile, and modular adaptation we introduce Introspective Cross-Attention (InCA). InCA operates on the base-model by attaching isolated shallow adapter modules utilizing any of the activation maps for downstream tasks. + +Since modern architectures are deep and so is the variety of possible downstream tasks, iterating over all the possible candidate activations of a model is prohibitively expensive. Instead in InCA we search for useful model representations exhaustively and in parallel by training numerous isolated adapters attached to different activations. When using InCA, each adapter is light and the base-model is fixed and does not require any backpropagation. This makes InCA computationally efficient in terms of GPU memory which is crucial for scaling to large models. Further, the shallow InCA adapter networks simplify the training dynamics as compared with existing approaches which speeds up training considerably and makes the optimization straightforward and robust (see Appendix C). + +In detail, during parallel training of InCA, a set of adapters sharing the same architecture are trained simultaneously and independently on a downstream task. Each adapter accepts an assigned activation and does not feed back to the backbone, leaving the backbone execution unaltered during both training and inference (Fig. 2). At inference, the learned adapters may be combined or a best performing adapter can be selected for downstream prediction. 
The InCA adapter architecture is simple, consisting of a single cross-attention module followed by a linear classifier for the downstream task. Despite the simplicity of the adapter, we observe that a single top-performing adapter trained for a downstream + +task is capable of achieving strong performance across a variety of different architectures and pretrainings when tested on a diverse set of visual domains. Because our approach does not modify the execution of the model or any of the pre-trained layers as done in existing parameter-efficient approaches [28, 34, 27], our method can be automatically applied to any architecture without the hassle of re-implementation and architecture specific modifications. In particular, we present results of InCA for ViT [18], SWIN [48], and CNN architectures [49, 74] on a suite of fine-grained recognition tasks. + +Since the adapters learned in InCA are shallow and small (1.3% of the parameters on ViT-L/16), the strong performance of a single adapter implies that many pre-trained models already contain strong representations for a diverse set of downstream tasks. Instead, previous approaches like linear probing fall short in using the representations, not because the task can not be solved using the existing model, but rather because of not having the right "extraction capacity" of cross-attention; we explore this systematically in Sec. 4 and present a theoretical proof for the advantage of the cross-attention layer in Appendix D. For challenging datasets, using intermediate representations as opposed to only adapting the last layer's representation is key to making InCA a versatile method that closes the gap with full fine-tuning on diverse tasks (See Fig. 1). + +A byproduct of the exhaustive approach of InCA adaptation is a signature describing the performance of the internal representations of a network on different downstream tasks. 
This renders InCA as a powerful tool for understanding the underlying representations of pre-trained models and task space [1] (See Sec. 5, Appendix B). Curiously, we observe that in certain downstream datasets, different pre-trained models hold similar performance signatures, even when the pre-trained models use different architectures or pre-training augmentations. + +The isolated training of the adapters means no backpropagation through the pre-trained model is taking place. This significantly reduces the correlation between adaptation cost and the pre-trained model size, which makes it possible to leverage very large architectures even with modest compute resources. Fig. 1 shows that one V100 GPU can train InCA with 40 adapters using an architecture as big as ViT-G/14. In contrast, existing parameter efficient approaches that backpropagate through the architecture exhaust GPU memory with any model larger than ViT-B/16. + +# Our contributions are summarized as + +- We introduce InCA, a method that enables versatile downstream adaptation by surveying the internal network representations of any frozen model with lightweight cross-attention based modules as an alternative to full fine-tuning. On ViT-L/16, InCA matches full fine-tuning performance and reaches within $0.7\%$ accuracy for a SWIN-L backbone on average over 11 diverse downstream domains. +- We demonstrate how the modular adapter architecture of InCA enables flexible learning and inference scenarios and present the Open-InCA adapter. With our approach we unlock powerful pre-trained models for reusable and parallel multi-task inference, and class-incremental learning. +- On the efficiency front, InCA scales to massive scale architectures under typical computation budgets while its implementation can be automatically applied to any new model. For example, training ViT-G/14 using InCA results in $76\%$ GPU memory reduction as compared with full fine-tuning. 
Further, InCA is easy to optimize and highly parameter efficient (See Appendix C). + +The rest of the paper is organized as follows: In Sec. 2 we review related work, and in Sec. 3 we present our approach. We empirically evaluate InCA on a wide set of visual recognition tasks in Sec. 4. Lastly, we provide analysis of intermediate representation signatures (Sec. 5) followed by discussion (Sec. 6). Additional results and analysis, including Open-InCA, are presented in the Appendix. + +# 2 Related works + +Transfer learning Transfer learning in deep neural networks aims at endowing existing models with new "downstream" knowledge. The de-facto approach for transfer learning in deep learning modifies an existing pre-trained network by applying full, partial, or linear fine-tuning on the model [37, 66, 45]. Depending on the variability between the source and downstream task domains, + +different approaches aim at delineating different transfer domains [1, 58, 76], extending coverage of transfer via suitable pre-trainings such as meta-learning [42, 19, 30], or selecting and combining from a set of expert models [15, 21]. More broadly, the field of representation learning focuses on learning transferable representations [38] via different pre-training strategies that can be applied for downstream tasks [11]. + +Efficient adaptation methods In recent times, the top-performing pre-trained model architectures are becoming considerably larger [61, 49], and we arrive at a crossroads as full fine-tuning of such large models is becoming out of reach in many practical settings. Recent works address the storage costs associated with large-model transfer by proposing parameter-efficient transfer approaches, as opposed to storing all of the fine-tuned model parameters [24].
These approaches achieve parameter efficiency by training a subset of existing parameters [6, 45], inserting new weight parameters into existing modules [28, 27, 39, 16], or via additional learnable activations or inputs, known as prompts [34, 65, 43]. Compared with existing work [79, 75], InCA is also parameter efficient, yet we place special emphasis on compute and optimization efficiency, especially in the large-scale model setting. Additional lines of work study learning via selective tuning, enabling multi-domain transfer [70, 22]. + +Feature extraction with attention Self- and cross-attention mechanisms aggregate information from a set of feature tokens [69, 41, 44] and create representations based on relative inter-feature importance as computed by the attention mechanism [12, 69]. Self-attention plays a key role in the transformer architecture [69], enabling non-local information aggregation. In settings where the number of inputs and desired outputs differ, cross-attention enables flexible aggregation based on a pair of query and key feature sets. When using cross-attention, one can cross-attend between different sets of activations [44, 17] or between a set of activations and learnable latent parameters [32, 9, 80]. In our settings, the adapter architecture applies cross-attention on extracted representations from a pretrained network, inspired by the cross-attention module of Perceiver [32]. However, we train multiple adapters in parallel and avoid the iterative re-sampling architecture present in their work. More generally, cross-attention layers have been vital in many existing object detection and multi-modal systems that fuse activations [44, 8] or apply cross-attention with learnable latents [3, 32, 77, 9, 80].
+ +Learning with intermediate features The re-use of intermediate representations in deep learning is vast, spanning from works on interpretability [78] to state-of-the-art approaches in object detection that harness intermediate layers for multi-resolution feature pyramids [47, 68, 20] and segmentation [23, 29]. For ConvNets, the work of [2] studies classification utilizing intermediate network representations, and the authors observe a decrease in accuracy when probing earlier layers. + +# 3 Method + +We introduce InCA, a lightweight and modular transfer learning alternative to full fine-tuning that avoids backpropagation through the base-model. Let $f(x) = g_{n} \circ g_{n-1} \circ \ldots \circ g_{1}(x)$ be a pretrained feed-forward neural network of $n$ layers, with $g_{j}(\cdot)$ corresponding to the $j$ -th layer of the network. We denote the activation computed by $g_{j}$ as $f_{j}(x) = g_{j} \circ g_{j-1} \circ \ldots \circ g_{1}(x)$ . During network inference, a "forward" computation processes and computes each $f_{j}(x)$ activation to arrive at the network's final prediction $f(x)$ . During standard training, all of the intermediate activations $\{f_{1}(x), \ldots, f_{n-1}(x), f_{n}(x)\}$ are held in GPU memory and are used to compute gradients to update the model. For large models, this incurs large computational and GPU memory costs [63], which limits using the best and largest available pre-trained models under typical computation budgets. + +Instead, we attach a set of isolated "models" to the pre-trained model $f$ at selected activations $f_{j_k}$ and pass them as input to a set of lightweight and shallow networks $h_k(a)$ with separate parameters and losses. With this, we can train a set of heterogeneous adapters $h_k(a)$ in parallel, while computing inference of the pre-trained model $f$ only once during each update (see Fig. 2). For a set of adapters $h_k(a)$ that take as input intermediate activations from $\{f_{j_k}\}$ , training follows as: + +1.
Single inference of $f$ through a data batch $x$, which computes $f(x)$ and the selected activations $\{f_{j_k}(x)\}$ . + +2. Calculate the batch predictions and losses for each adapter $h_k$ , $\ell_k = \ell(h_k(f_{j_k}(x)), y)$ . +3. Compute $\ell_{\Sigma} = \sum_{k} \ell_{k}$ ; applying automatic differentiation to $\ell_{\Sigma}$ then efficiently resolves the gradients and updates of each $h_k$ . + +By avoiding backpropagation through the pre-trained $f$ we decouple the majority of the training costs from the size of the base model $f$ ; instead, the costs correlate with the much smaller adapter set $\{h_k\}$ . Below we demonstrate that even a simple cross-attention module for $h_k$ makes the overall adaptation sufficiently expressive yet highly efficient. + +InCA adapter After extraction of the layer representation $f_{k}(x)$ , we have a high-dimensional activation map at our disposal. To predict a target label $\hat{y}$ from the high-dimensional $f_{k}(x)$ , the typical approach is to apply dimension reduction, such as averaging (avg-pool) or computing maximum values over a subset of the dimensions, and then apply a linear classification head: + +$$ \hat{y} = \operatorname{head} \circ \operatorname{avg-pool} \circ f_{k}(x). $$ + +Nonetheless, this simple aggregation approach leads to a loss of information, which we observe empirically in Sec. 4 and analyze theoretically in Appendix D. Instead, we use a cross-attention module to intelligently aggregate information from the entire large-dimensional activation map $f_{k}(x)$ into a fixed-dimensional representation based on a number of cross-attention queries.
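As an illustrative sketch of the three-step procedure above (a NumPy toy with stand-in linear adapters and a stand-in loss, not the actual implementation), one frozen forward pass feeds every adapter:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for the frozen base model f = g_n ∘ ... ∘ g_1: four tanh layers.
layers = [rng.normal(0.0, 0.3, (8, 8)) for _ in range(4)]

def forward_with_taps(x, tap_indices):
    """Step 1: one inference of f, collecting the selected activations {f_{j_k}(x)}."""
    taps, a = {}, x
    for j, W in enumerate(layers, start=1):
        a = np.tanh(a @ W)              # a = f_j(x)
        if j in tap_indices:
            taps[j] = a
    return taps

# One stand-in adapter h_k (here just a linear map) per tapped activation.
adapters = {2: rng.normal(0.0, 0.3, (8, 3)), 4: rng.normal(0.0, 0.3, (8, 3))}

def loss(pred, y):                      # stand-in for the loss ℓ
    return float(((pred - y) ** 2).mean())

x, y = rng.normal(size=8), np.array([1.0, 0.0, 0.0])
taps = forward_with_taps(x, adapters.keys())                      # step 1
losses = {k: loss(taps[k] @ H, y) for k, H in adapters.items()}   # step 2
loss_total = sum(losses.values())                                 # step 3: ℓ_Σ = Σ_k ℓ_k
# Automatic differentiation of ℓ_Σ would now update each h_k in isolation;
# the frozen base model is never backpropagated through.
```

The key point the sketch makes is structural: the base-model pass happens once per batch, and each adapter only ever sees its own tapped activation.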
Specifically, for standard downstream adaptation, given an intermediate feature map $\mathbf{z} = [z^{1},\dots,z^{T}] = f_{k}(x)$ with $T$ tokens or channels, we use the following adapter architecture + +$$ \begin{array}{l} v_{\text{cross}}(\mathbf{z})_{[1:m]} := \operatorname{cross-attn}_{\theta}([z^{1},\dots,z^{T}],[q_{1},\dots,q_{m}]) \\ \operatorname{InCA}_{\theta}(\mathbf{z}) := \operatorname{head}_{\theta} \circ \operatorname{norm}\left(\operatorname{avg-pool}\left(v_{\text{cross}}(\mathbf{z})_{[1:m]}\right)\right). \end{array} $$ + +Note that the query tokens $[q_1, \ldots, q_m]$ are optimized along with $\theta$ . The multi-head cross-attention output $v_{\text{cross}}$ is produced by surveying the feature map $f_k(x)$ with the query tokens $[q_1, \ldots, q_m]$ . Then, the classification output $\hat{y} = \operatorname{InCA}_{\theta}(\mathbf{z})$ is obtained by averaging the cross-attention outputs (if $m > 1$ ), followed by a fully-connected classification head after normalizing with LayerNorm [4]. Based on our experiments, using a single query token $q$ ( $m = 1$ ) achieves strong performance and is computationally efficient, and we report results with $m = 1$ unless otherwise stated. + +For more flexible inference, such as in the settings of continual and class-incremental learning tasks, we present a modular version of InCA that disentangles the representations learned for different classes, which we refer to as "Open-InCA".
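For concreteness, the single-query ( $m = 1$ ) adapter can be sketched in plain NumPy; the single-head attention, names, and shapes below are our simplifications rather than the paper's implementation:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

class InCAAdapter:
    """Sketch of a single-head, single-query (m = 1) InCA adapter."""
    def __init__(self, d, num_classes, seed=0):
        rng = np.random.default_rng(seed)
        s = 1.0 / np.sqrt(d)
        self.q = rng.normal(0.0, s, (1, d))        # learnable query token q_1
        self.Wq = rng.normal(0.0, s, (d, d))       # cross-attention projections
        self.Wk = rng.normal(0.0, s, (d, d))
        self.Wv = rng.normal(0.0, s, (d, d))
        self.W_head = rng.normal(0.0, s, (d, num_classes))

    def __call__(self, z):
        """z: (T, d) activation map f_k(x) with T tokens."""
        Q, K, V = self.q @ self.Wq, z @ self.Wk, z @ self.Wv
        attn = softmax(Q @ K.T / np.sqrt(K.shape[-1]))   # (1, T) weights over tokens
        v_cross = (attn @ V)[0]                          # m = 1, so avg-pool is a no-op
        h = (v_cross - v_cross.mean()) / (v_cross.std() + 1e-5)  # LayerNorm, no affine
        return h @ self.W_head                           # class logits

adapter = InCAAdapter(d=32, num_classes=10)
z = np.random.default_rng(1).normal(size=(49, 32))   # e.g. T = 49 tokens from f_k(x)
logits = adapter(z)                                  # shape (10,)
```

The learnable query plays the role of $q_1$ above: it decides how the $T$ tokens are weighted, which is precisely the "extraction capacity" a plain avg-pool lacks.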
For a $c$ -way classification task, define separate queries $[q_1,\dots,q_c]$ for each class to compute representations separately, + +$$ \begin{array}{l} \left[ v_{\text{cross}}^{1}(\mathbf{z}), \dots, v_{\text{cross}}^{c}(\mathbf{z}) \right] := \operatorname{cross-attn}_{\theta}\left(\left[ z^{1},\dots,z^{T} \right], \left[ q_{1},\dots,q_{c} \right]\right) \\ \operatorname{Open-InCA}_{\theta}(\mathbf{z}) := \operatorname{diag-head}_{\theta} \circ \operatorname{norm}\left(\left[ v_{\text{cross}}^{1}(\mathbf{z}), \dots, v_{\text{cross}}^{c}(\mathbf{z}) \right]\right) \end{array} $$ + +Above, $\operatorname{diag-head}_{\theta}$ is a linear operator layer that operates on a matrix input $[a_1,\dots,a_c]$ "diagonally". Given a weight parameter $W$ , the operator is defined as the column-wise dot product, + +$$ \operatorname{diag-head}_{\theta}([a_{1},\dots,a_{c}]) = [\langle W_{1}, a_{1}\rangle, \dots, \langle W_{c}, a_{c}\rangle]. $$ + +Open-InCA composition In the Open-InCA adapter architecture, unique queries $[q_1,\dots,q_c]$ are defined for each class, along with a $\operatorname{diag-head}_{\theta}$ that processes each coordinate prediction independently. The diag-head, LayerNorm, and cross-attn modules in Open-InCA operate on each input $q_{i}$ independently, which separates the representation learned for each class and enables isolating each adapter output coordinate as + +$$ \operatorname{Open-InCA}(\mathbf{z})_{i} = \langle W_{i}, \operatorname{norm}(\operatorname{cross-attn}_{\theta}([z^{1},\dots,z^{T}],[q_{i}])) \rangle. $$ + +Above, $W_{i}$ corresponds to the $i$ -th column of the diag-head weight. As a result, Open-InCA enables class-level modularity, with the capabilities of new class insertion, deletion, and isolated class updates without regression.
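The diag-head operator and the class-level modularity it provides can be checked with a small NumPy sketch (our own naming; $A$ stacks the per-class cross-attention outputs $v_{\text{cross}}^{i}(\mathbf{z})$ column-wise):

```python
import numpy as np

def diag_head(W, A):
    """Column-wise dot product: [<W_1, a_1>, ..., <W_c, a_c>].
    W and A are (d, c): one d-dimensional head column and one attended
    feature vector per class, so coordinate i depends only on column i."""
    return np.einsum("dc,dc->c", W, A)

rng = np.random.default_rng(0)
d, c = 8, 5
W = rng.normal(size=(d, c))      # per-class head columns W_i
A = rng.normal(size=(d, c))      # per-class cross-attention outputs v_cross^i(z)
logits = diag_head(W, A)         # shape (c,)

# Deleting class 2 amounts to dropping its query/head columns; the
# remaining coordinates are untouched:
keep = [0, 1, 3, 4]
assert np.allclose(diag_head(W[:, keep], A[:, keep]), logits[keep])
```

Because coordinate $i$ depends only on $(q_i, W_i)$, dropping or inserting a class is a pure column operation, as the assertion verifies.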
For example, deleting class $i$ from the Open-InCA architecture amounts to simply dropping the query and head parameters $q_{i}$ and $W_{i}$ for that coordinate. In the setting of class-incremental learning (CIL), different query-head pairs from Open-InCA can be combined, as long as the parameters of the norm and cross-attn remain the same. In practice, this leads to the notion of training Open-InCA with fixed norm and cross-attention weight parameters, in what we refer to as "query-only-training". In query-only-training, the learning of a new class corresponds to learning just two $d$ -dimensional parameters per class and adapter, where $d$ is the token dimension. Nonetheless, when using pre-trained Open-InCA layer parameters, "query-only-training" performs on par with InCA on many datasets. In Appendix A we compare results of InCA, Open-InCA, and query-only-training in class-incremental learning (CIL). In Tab. 6 of the Appendix we observe that even learning just the query and head parameters is capable of harnessing the large-dimensional representation maps $f_{k}(x)$ . + +Layer branching candidate selection The cross-attention adapters can be applied in parallel over any intermediate layer of the network, and we observe that the performance of many tasks hinges on identifying the right intermediate layer to use for that task. When considering intermediate activations $f_{k}(x)$ , we observe that + +- Using activations such that $f_{j}(x)$ is directly computed from a residual connection yields better adapter accuracy. This reflects that network representations are refined through each residual block. +- The middle and later layers of the network provide stronger input representations for the adapter. This is likely because the representations of the early layers do not have discriminative enough features to be used directly for high-level tasks.
+ +Two-Stage training In settings where the base-model forward-propagation during InCA training is too constraining, one may conduct training in two stages. In the first stage, save the activations that serve as input to the adapter for the entire training set, by running the base-model inference for a single epoch. After saving, the second stage proceeds by training the adapters for $T$ epochs with the loaded activations. Suppose the per-epoch cost of the pre-trained model forward-propagation is $C_{PT}$ and the per-epoch cost of adapter optimization is $C_A$ ; then two-stage training reduces the training time from $O((C_{PT} + C_A) \times T)$ to $O(C_A \times T + C_{PT})$ , where $C_{PT} \gg C_A$ . With two-stage training, we are able to reduce a 30-epoch adapter training job to 30 seconds for a cached Stanf. Cars dataset ( $\sim 8,000$ training samples). We speculate that further optimization can reduce training costs to "real-time", enabling an array of user-interactive applications. + +# 4 Experiments + +Datasets In our experiments, we measure the capabilities of InCA on a diverse set of 11 fine-grained datasets consisting of: CUB-200 [73], Aircrafts [54], Stanford Cars [40], Stanford Dogs [35], Oxford Flowers 102 [56], MIT-67 [60], Oxford Pets [59], Describable Textures (DTD) [13], European Flood [5], FGVC Herbarium [57], and the EuroSAT Land Use dataset [26]. In Table 4 we explore InCA in the settings of multi-task learning and evaluate it on the ImageNet-to-Sketch benchmark that comprises 5 datasets: WikiArt [64], Oxford Flowers [59], Sketch [71], Stanford Cars [40], and CUB-200 [73]. + +Baselines For downstream transfer experiments, we compare InCA adaptation to other adaptation protocols. 1) paragon: Full fine-tuning is considered the paragon, as it performs well on a diverse set of datasets but incurs steep computational and parameter costs.
2) We compare InCA to other parameter-efficient approaches, including 2a) LoRA [28], 2b) Visual Prompt Tuning (VPT) [34], where we apply the top-performing variant, VPT-Deep, 2c) BitFit [6], and 2d) AdaLN [45], the LayerNorm analogue of the AdaBN approach. Note, each approach named in 2) requires backpropagating through the entire network's activations to update the learnable parameters, which leads to a large computational overhead compared with InCA. 3) In addition, we also compare InCA with a suite of computationally efficient approaches that, like InCA, avoid backbone backpropagation. These include 3a) Linear Probing (LP), 3b) Intermediate Linear Probing (In. LP), which utilizes the same training procedure as InCA but with a LP classifier on the activations, 3c) MLP-3, which probes the base-model with a 3-layer feed-forward network, and 3d) Intermediate MLP-3 (In. MLP-3), the extension of MLP-3 to intermediate layers. + +Training details In all of our results (including multi-task settings) we use the same training configuration for InCA. We only change the adapter architecture input layer to automatically match the dimension of the base-model activation map. InCA is robust to hyper-parameters and our training schedule is consistent for all runs. This amounts to 30 training epochs, an AdamW optimizer [52] with 2 learning rates, and a cosine annealing learning schedule; we provide the full training details
Top-1 Test Error, ViT-L/16

| Dataset | Full FT | InCA | InCA (last) | In. LP | LP | In. MLP-3 | MLP-3 | VPT [34] | LoRA [28] | AdaLN [45] | BitFit [6] |
|---|---|---|---|---|---|---|---|---|---|---|---|
| CUB-200 | 9.1 | 8.7 | 9.4 | 16.2 | 16.2 | 13.9 | 13.9 | 10.4 | 12.7 | 15.6 | 15.4 |
| DTD | 18.2 | 17.2 | 18.4 | 18.9 | 20.6 | 17.4 | 20.1 | 21.4 | 19.4 | 22.2 | 21.9 |
| Flood Depth | 18.9 | 17.1 | 19.6 | 17.8 | 22.8 | 17.6 | 20.1 | 19.0 | 19.6 | 18.7 | 18.7 |
| EuroSAT | 1.0 | 1.2 | 1.9 | 2.1 | 3.7 | 1.5 | 2.5 | 1.1 | 0.9 | 1.5 | 1.4 |
| Aircrafts | 14.9 | 15.6 | 21.9 | 50.6 | 67.4 | 36.8 | 47.4 | 21.7 | 16.6 | 28.5 | 27.2 |
| Herbarium | 18.8 | 21.1 | 24.6 | 32.6 | 39.8 | 29.5 | 36.4 | 21.4 | 19.2 | 27.9 | 28.3 |
| MIT-67 | 10.4 | 9.0 | 9.0 | 9.7 | 10.5 | 10.1 | 11.2 | 14.8 | 14.8 | 15.1 | 15.1 |
| Oxford Flowers | 0.6 | 0.3 | 0.4 | 0.6 | 1.1 | 0.5 | 0.7 | 2.2 | 4.0 | 7.0 | 7.2 |
| Oxford Pets | 4.2 | 4.0 | 4.2 | 6.1 | 6.4 | 5.3 | 5.5 | 6.9 | 4.3 | 5.5 | 5.3 |
| Stanf. Cars | 8.1 | 7.7 | 10.2 | 29.2 | 47.2 | 20.8 | 31.4 | 9.2 | 8.4 | 16.0 | 14.7 |
| Stanf. Dogs | 5.9 | 5.4 | 5.8 | 5.3 | 5.3 | 5.7 | 5.7 | 7.3 | 4.3 | 3.8 | 3.7 |
| Mean Top-1 Test Error (Max. gap to Full FT) | 10.0 (0.0) | 9.8 (-2.3) | 11.5 (-7.0) | 17.2 (-35.7) | 21.9 (-52.5) | 14.5 (-21.9) | 17.7 (-32.5) | 12.3 (-6.8) | 11.3 (-4.4) | 14.7 (-13.6) | 14.4 (-12.3) |
| % Trainable param. | 100% | 1.3% | 1.3% | 0.1% | 0.1% | 2.8% | 2.8% | 0.8% | 2.4% | 0.1% | 0.1% |
| No backbone backprop. | ✗ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✗ | ✗ | ✗ | ✗ |
+ +in Appendix F. While InCA is efficient enough to operate at larger resolutions, we use 224 image resolution unless stated otherwise. Nonetheless, InCA performance improves at 384 resolution while remaining computationally competitive (see Table 2). + +Transfer learning on ViT In Table 1, we demonstrate the transfer performance of InCA applied to ViT-L/16. For each dataset we train InCA and extract activations at residual layers of the ViT for the last 12 blocks and output layer. For all baselines and our method we use the ViT DeiT pre-training [67] and additionally report ViT-L/16 pre-training results in Appendix Table 8. In the table, we compare InCA to full fine-tuning as well as applying InCA on the last layer and observe that only InCA is capable of achieving good results on challenging datasets such as Aircraft, Stanf. Cars, etc. and closes the maximal gap to full fine-tuning to $-2.3\%$ . The second best adaptation approach is LoRA which achieves a maximum gap of $-4.4\%$ to full fine-tuning, yet at additional training costs. For a single dataset we can train the InCA modules with 2 learning rates in parallel which corresponds to 26 InCA modules with identical architectures attached to 13 activation maps. In this case, the total training costs of InCA on a single dataset correspond to one base-model run. In Appendix C, we report the hyper-parameter settings and training cost of InCA and current state of the art adaptation method for transformers, VPT [34] which incurs up to $8.7\times$ the training costs of InCA with a large hyper-parameter search (2 vs. 24 settings). + +Transfer learning on SWIN In Table 3 we present downstream adaptation results for the SWIN-L pre-trained model. InCA adaptation is applied to the 3rd and 4th stages of the network residual activations. 
Table 1: Fine-grained Classification Top-1 Test Error (ViT-L/16). We compare InCA to full fine-tuning (Full FT) along with other adaptation approaches for downstream learning. For each method we summarize the maximum gap in performance compared with the full fine-tuning paragon. In addition, we report the parameter efficiency and whether the method requires backpropagation through the pre-trained model. The minimum error over the columns excluding Full FT is presented in bold.
Mean Top-1 Test Error (Max. gap to full FT)

| Category | Architecture | Pretraining data | Full FT | InCA | InCA (last) | inter. LP | LP | Model size |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Vanilla Transformer | ViT-B/16 [18] | In21K | 13.0 (0) | 15.9 (-7.6) | 17.5 (-16.4) | 23.9 (-32.4) | 24.3 (-32.4) | 86.5M |
| Vanilla Transformer | ViT-B/16 [44] | ALBEF (CC14M) | 13.8 (0) | 13.5 (-4.2) | 14.8 (-9.3) | 24.7 (-42.6) | 25.8 (-42.6) | 85.9M |
| Vanilla Transformer | ViT-L/16 [67] | In21K (DeiT) | 10.0 (0) | 9.8 (-2.3) | 11.5 (-7.0) | 17.2 (-35.7) | 21.9 (-52.5) | 304.3M |
| Vanilla Transformer | ViT-L/16 @384 [67] | In21K (DeiT) | $-\dagger$ | 9.2 (-0.6$\dagger$) | 11.7 (-9.1$\dagger$) | 17.3 (-38.1$\dagger$) | 22.0 (-54.4$\dagger$) | 304.7M |
| Vanilla Transformer | CLIP-ViT-L/14 @336 [61] | 400M Im-Text | $-\dagger$ | 9.2 | 10.6 | 19.6 | 21.8 | 304.2M |
| Vanilla Transformer | ViT-H/14 [18] | 2B Im-Text | $-\dagger$ | 9.4 | 10.4 | 14.0 | 15.2 | 632.8M |
| Vanilla Transformer | ViT-G/14 [31] | 2B Im-Text | $-\dagger$ | 9.6 | 10.4 | 15.3 | 16.8 | 1884.9M |
| Hier. Transformer | SWIN-L [48] | In21K | 9.3 (0) | 10.0 (-3.6) | 12.4 (-9.5) | 15.8 (-31.3) | 18.3 (-40.5) | 196.5M |
| Convolutional | ConvNext-B [49] | In21K | 9.4 (0) | 10.7 (-7.4) | 12.5 (-12.6) | 19.1 (-44.2) | 19.4 (-44.2) | 88.5M |
| Convolutional | ResNext-101 [74] | IG-3.5B [53] | 11.4 (0) | 12.0 (-8.7) | 17.3 (-27.1) | 20.1 (-38.8) | 21.3 (-39.7) | 468.5M |
Table 2: Mean Top-1 Test Error for transfer learning with a variety of ViT, SWIN, and convolutional networks, including different network scales and pre-training strategies. Averages are reported on the 11 datasets presented in Table 1. $\dagger$ indicates Full FT was avoided due to prohibitive computational costs. For DeiT ViT-L/16 @384 the gap is computed with respect to the 224 pre-training.

Because of the heterogeneous activation dimensions of the hierarchical SWIN architecture, the reported adaptation model sizes depend on the activation map used for the selected adapter. InCA achieves the smallest maximum gap to full fine-tuning on SWIN while being computationally efficient. As with ViT-L, on SWIN we observe that challenging datasets require using intermediate activation maps for InCA, closing the maximal gap from $(-9.5\%)$ to $(-3.6\%)$.

Evaluating InCA on different pre-trained models InCA can be applied to any feed-forward neural network without any implementation changes. We simply specify the intermediate layer names and feature tensor-ordering for any new architecture, and InCA can be used directly. We note this is in sharp contrast to methods that rely on specific layers such as convolution filters [7, 55] or self-attention [28, 34, 43]. We illustrate the architectural versatility of our method in Table 2. We report the mean and maximum test-error gap from full fine-tuning on the 11 fine-grained datasets studied in Table 1. We test different architecture families, including vanilla vision transformers (ViTs), SWIN [48], and modern convolutional networks (ConvNext [49], ResNext [74]). In addition, we test models pre-trained via different strategies, including supervised learning [18, 67] and vision-language objectives [44, 61, 31]. We also test InCA at different ViT scales, from ViT-B/16 (86M) to ViT-G/14 (1.8B). For InCA adaptation, all model sizes were trained on a single V100 GPU with batch size 32, including for the larger input resolution runs.
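The per-layer adapter search described above can be illustrated with a minimal NumPy sketch. Everything here is a toy stand-in, not the paper's implementation: synthetic "cached activations" from three hypothetical layer names, and a multinomial logistic-regression probe playing the role of an InCA adapter. The point is the workflow: adapters are trained independently on frozen features from each named layer, and the best layer is selected by its error, with no gradient ever touching the backbone.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for cached activations of a frozen backbone at three named
# layers (layer names are illustrative). "block_11" is made the most
# linearly separable, playing the role of an informative intermediate
# representation such as those observed for Aircraft or Stanf. Cars.
n, d, c = 200, 16, 4
y = rng.integers(0, c, size=n)
class_means = rng.normal(size=(c, d))
noise = {"block_11": 0.3, "block_23": 1.0, "head": 2.0}
acts = {name: class_means[y] + s * rng.normal(size=(n, d))
        for name, s in noise.items()}

def train_probe(X, y, steps=300, lr=0.5):
    # Multinomial logistic regression on frozen features (a linear stand-in
    # for an InCA adapter); no gradient flows into the backbone.
    W = np.zeros((X.shape[1], c))
    Y = np.eye(c)[y]
    for _ in range(steps):
        logits = X @ W
        p = np.exp(logits - logits.max(1, keepdims=True))
        p /= p.sum(1, keepdims=True)
        W -= lr * X.T @ (p - Y) / len(X)
    return W

# One adapter per layer, trained independently; pick the best by error.
errors = {name: (np.argmax(X @ train_probe(X, y), 1) != y).mean()
          for name, X in acts.items()}
best_layer = min(errors, key=errors.get)
```

In this sketch the most separable layer wins the selection, mirroring how the exhaustive per-layer search surfaces the most informative representation; the real method replaces the linear probe with a cross-attention adapter.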
Multi-task Experiments InCA's isolated design is suitable for multi-task inference: a single pre-trained model can efficiently evaluate a batch of samples on multiple tasks, allowing for "one-to-many" inference. We compare InCA on the ImageNet-to-Sketch multi-task benchmark in Table 4. All methods except $\mathrm{BA}^2$ were trained with a ViT-L/16 model and evaluated with the ImageNet-to-Sketch version of each dataset [55]. For $\mathrm{BA}^2$ [7], we report the adaptation on a ResNet-50 [25] backbone, as the $\mathrm{BA}^2$ approach requires convolutional filters. Overall, InCA is the top-performing method, reaching near the paragon on the evaluated datasets. Importantly for multi-task use, only InCA and LP enable multi-task inference via "computation sharing" of the base-model inference.

Learning efficiency Isolating the learning from the base model means InCA learns shallow neural networks directly on a downstream task. By avoiding deeply backpropagated gradients through the base model, the adapters receive direct signal, which improves the optimization dynamics and speed of training. We compare the number of training steps required to train InCA and VPT-Deep and observe that InCA can be optimized in $4.5\times$ fewer epochs than VPT. Here we do not take into account the additional GPU memory cost of optimizing VPT in each step, nor the hyper-parameter sweeps required by VPT. More detailed efficiency comparison results are given in Appendix C.
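The "one-to-many" computation sharing can be sketched as follows. This is a hedged toy model, not the paper's code: a stack of dense layers stands in for the frozen backbone, and plain linear heads stand in for the per-task adapters. What it shows is the structural point: one forward pass through the shared model produces activations that all task heads consume, so inference cost grows only by the (cheap) heads.

```python
import numpy as np

rng = np.random.default_rng(0)

def backbone(x, weights):
    # Stand-in for the frozen pre-trained model: record every intermediate
    # activation so downstream task heads can read any layer.
    acts = []
    for W in weights:
        x = np.tanh(x @ W)
        acts.append(x)
    return acts

d, n_layers, n_tasks, n_classes = 64, 6, 5, 10
weights = [rng.normal(scale=d ** -0.5, size=(d, d)) for _ in range(n_layers)]
# One lightweight head per task; each reads its own preferred layer.
heads = [(int(rng.integers(0, n_layers)), rng.normal(size=(d, n_classes)))
         for _ in range(n_tasks)]

x = rng.normal(size=(8, d))      # one batch of samples
acts = backbone(x, weights)      # a single shared forward pass...
logits = {t: acts[layer] @ Wh    # ...serves all five task heads at once
          for t, (layer, Wh) in enumerate(heads)}
```

Methods that modify backbone weights per task (e.g. fine-tuned copies) cannot share `acts` across tasks and must run the backbone once per task, which is what the "Inference Time (for all 5 tasks)" column measures.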
Top-1 Test Error, SWIN-L

| Dataset | Full FT | InCA | InCA (last) | In. LP | LP | In. MLP-3 | MLP-3 | LoRA [28] | AdaLN [45] | BitFit [6] |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| CUB-200 | 9.0 | 9.1 | 9.6 | 10.2 | 10.6 | 9.7 | 9.7 | 10.0 | 9.1 | 8.8 |
| DTD | 15.6 | 17.8 | 19.1 | 17.7 | 19.1 | 16.7 | 16.7 | 15.8 | 16.7 | 17.0 |
| Flood Depth | 17.6 | 16.3 | 18.3 | 18.5 | 18.5 | 16.7 | 18.5 | 17.1 | 16.9 | 17.8 |
| EuroSAT | 0.7 | 1.5 | 2.4 | 2.7 | 3.7 | 1.6 | 2.2 | 0.9 | 1.1 | 1.7 |
| Aircrafts | 12.2 | 15.8 | 25.3 | 43.5 | 52.7 | 33.7 | 34.8 | 16.1 | 22.7 | 26.5 |
| Herbarium | 14.9 | 18.2 | 23.0 | 29.2 | 34.0 | 24.9 | 27.6 | 18.4 | 21.2 | 29.7 |
| MIT-67 | 10.5 | 10.1 | 10.1 | 9.9 | 10.3 | 10.2 | 10.2 | 9.6 | 8.9 | 8.5 |
| Oxford Flowers | 0.5 | 0.3 | 0.4 | 0.5 | 0.5 | 0.5 | 0.5 | 0.4 | 0.4 | 0.4 |
| Oxford Pets | 4.6 | 4.7 | 5.5 | 5.0 | 5.5 | 5.5 | 5.5 | 5.2 | 4.8 | 4.9 |
| Stanf. Cars | 7.3 | 8.4 | 15.0 | 29.2 | 39.0 | 22.4 | 26.0 | 9.6 | 14.2 | 18.4 |
| Stanf. Dogs | 9.1 | 8.1 | 8.1 | 7.1 | 7.1 | 9.8 | 9.8 | 11.3 | 9.1 | 9.0 |
| Mean Top-1 Test Error (Max. gap to Full FT) | 9.3 (0) | 10.0 (-3.6) | 12.4 (-9.5) | 15.8 (-31.3) | 18.3 (-40.5) | 13.8 (-21.5) | 14.7 (-22.6) | 10.4 (-3.9) | 11.4 (-10.5) | 13.0 (-14.8) |
| % Trainable param.$\S$ | 100% | 3.7% | 3.7% | 0.1% | 0.1% | 2.8% | 2.8% | 0.8% | 0.1% | 0.1% |
| No backbone backprop. | | | | | | | | | | |
Table 3: Fine-grained Classification Top-1 Test Error (SWIN-L). We compare InCA to full fine-tuning (Full FT) along with other adaptation approaches for downstream learning. For each method we summarize the maximum gap in performance compared with the full fine-tuning paragon. In addition, we report the parameter efficiency and whether the method requires backpropagation through the pre-trained model. $\S$ For SWIN-L, different activation map sizes lead to different percentages of trainable parameters, and we report the maximum for each method. The minimum error over the columns excluding Full FT is in bold.
Top-1 Test Error (per dataset) and Adaptation Efficiency (last three columns)

| Method | Avg. | Flowers | WikiArt | Sketch | Cars | CUB-200 | # of trainable parameters | GPU Memory (training) | Inference Time (for all 5 tasks) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Full fine-tuning | 10.5 | 0.6 | 14.7 | 14.4 | 10.8 | 12.2 | | | |
| Linear probing | 29.8 | 10.9 | 37.2 | 29.3 | 44.5 | 27.9 | 0.01× | 0.17× | 1.01× |
| BA$^2$ [7] | 15.9 | 4.3 | 27.7 | 20.7 | 7.9 | 18.8 | 1.03× | | |
| TAPS [70] | 10.4 | 0.6 | 15.8 | 14.0 | 11.1 | 10.4 | 4.12× | 1.23× | |
| SpotTune [22] | 14.3 | 3.7 | 24.2 | 19.8 | 7.6 | 16.0 | 5.27× | 7.3× | |
| InCA | 9.8 | 0.3 | 15.4 | 16.8 | 7.7 | 8.8 | 0.06× | 0.51× | 1.13× |
Table 4: Multitask Efficiency and Top-1 Test Error on the "ImageNet-to-Sketch" benchmark. InCA is the top-performing method on average and is parameter efficient. Further, only InCA and linear probing "share computation" of the pre-trained model and enable "one-to-many" inference execution, measured in the "Inference Time" column. $\mathrm{BA}^2$ is based on ResNet-50 and cannot be applied to ViTs. The rest of the methods are based on ViT-L/16.

# 5 Analysis

We analyze the results of InCA adaptation, focusing on the performance signature of different intermediate representations used as input for the adapter and on the relation between the top InCA layers and fine-tuning. Further, in Appendix D, we provide a theoretical proof motivating the extraction capabilities of cross-attention as it is used in InCA.

Intermediate representations We consider the intermediate-representation signature created by evaluating the accuracy of adapters that utilize different layers. In Figure 3, we review the adapter performance applied to different layer representations. Datasets like CUB-200 and Flood Depth mostly prefer final representations, whereas for datasets like Aircrafts and Stanf. Cars, the best adaptations use earlier representations, with decreasing performance towards the last activations. Curiously, we observe consistency in layer affinity for certain datasets across different pre-trainings of the backbone and even across different architectures (Appendix Fig. 5).

InCA and partial tuning In Appendix B we compare InCA with gradually un-freezing the base model and applying partial fine-tuning on a growing set of layers. We run a set of experiments where we fine-tune a pre-trained model starting at different freezing points; that is, we optimize all layers of the network after the freezing-point location. For each dataset we construct a "partial tuning curve", where we plot the final test accuracy vs. the freezing point (Figure 4).
Interestingly, we observe a direct correlation between the layer location of the top InCA adapter and the point where the partial tuning curve saturates. In particular, the partial tuning test accuracy plateaus (due to the tuning of more layers) at around the same layer location as the top-performing InCA adapter. Namely, the point of saturation of the partial tuning curve is where partial tuning becomes capable of harnessing the representation found by InCA at that layer. This gives further evidence that "your representations are in the network" and that fine-tuning surfaces existing representations that can be directly identified by InCA.

![](images/71d5ca5c7dbb733bef1363968bbb658f4eafa075cd4457190202c3f3e028083c.jpg)
Figure 3: InCA Layer Performance Signature. Relative test error improvement of InCA adapters attached to different intermediate layers. We evaluate InCA with ViT-L/16 with adapters at each residual block starting from Block 11.

![](images/a7493b30e9af53b369e2415873c461ba44d0c881da18e59d595da12746e3052c.jpg)
Figure 4: Partial Fine-tuning vs. InCA. Vertical dashed lines indicate the top InCA layer; curves show final test accuracy for different partial tuning training runs. Each mark indicates a run where all of the pre-trained model parameters after a "freeze point" in the network's layers are trained. Note partial tuning performance saturates in close proximity to the optimal InCA adapter layer. This is aligned with our hypothesis that full fine-tuning attempts to surface existing representations already in the network. In that case, performance improves until the tuning approach unlocks the capacity to utilize an existing relevant representation, and performance plateaus afterwards. Note here we refer to output layers, e.g., the adapter at block 19 means the adapter corresponding to the final output of block 19, or the input to block 20.
However, InCA adaptation operates an order of magnitude more efficiently and scales better to large models.

# 6 Discussion

In this paper, we present an efficient and effective alternative to full fine-tuning for transfer learning, closing the gap to full fine-tuning on a diverse set of downstream datasets. InCA has many benefits: it inherently generalizes to different architectures, efficiently scales to massive models, optimizes effectively, and unlocks modular and flexible adaptation applications including multi-task and incremental learning. Further, through the parallel exhaustive search of InCA, we are able to better understand the inner representation dynamics of neural networks and construct illuminating "representation signatures" of different models and datasets.

# References

[1] Alessandro Achille, Michael Lam, Rahul Tewari, Avinash Ravichandran, Subhransu Maji, Charless C. Fowlkes, Stefano Soatto, and Pietro Perona. Task2vec: Task embedding for meta-learning. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), October 2019. 3, 4
[2] Guillaume Alain and Yoshua Bengio. Understanding intermediate layers using linear classifier probes. In 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Workshop Track Proceedings. OpenReview.net, 2017. 4
[3] Jean-Baptiste Alayrac, Jeff Donahue, Pauline Luc, Antoine Miech, Iain Barr, Yana Hasson, Karel Lenc, Arthur Mensch, Katherine Millican, Malcolm Reynolds, Roman Ring, Eliza Rutherford, Serkan Cabi, Tengda Han, Zhitao Gong, Sina Samangooei, Marianne Monteiro, Jacob L. Menick, Sebastian Borgeaud, Andy Brock, Aida Nematzadeh, Sahand Sharifzadeh, Mikolaj Binkowski, Ricardo Barreira, Oriol Vinyals, Andrew Zisserman, and Karén Simonyan. Flamingo: a visual language model for few-shot learning. In NeurIPS, 2022. 4
[4] Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E Hinton. Layer normalization.
arXiv preprint arXiv:1607.06450, 2016. 5
[5] Björn Barz, Kai Schröter, Moritz München, Bin Yang, Andrea Unger, Doris Dransch, and Joachim Denzler. Enhancing flood impact analysis using interactive retrieval of social media images. Archives of Data Science, Series A (Online First), 5(1):A06, 21 S. online, 2018. 6
[6] Elad Ben Zaken, Yoav Goldberg, and Shauli Ravfogel. BitFit: Simple parameter-efficient fine-tuning for transformer-based masked language-models. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 1-9, Dublin, Ireland, May 2022. Association for Computational Linguistics. 4, 6, 7, 8, 13
[7] Rodrigo Ferreira Berriel, Stéphane Lathuilière, Moin Nabi, Tassilo Klein, Thiago Oliveira-Santos, Nicu Sebe, and Elisa Ricci. Budget-aware adapters for multi-domain learning. In 2019 IEEE/CVF International Conference on Computer Vision, ICCV 2019, Seoul, Korea (South), October 27 - November 2, 2019, pages 382-391. IEEE, 2019. 8, 9
[8] Sebastian Borgeaud, Arthur Mensch, Jordan Hoffmann, Trevor Cai, Eliza Rutherford, Katie Millican, George Bm Van Den Driessche, Jean-Baptiste Lespiau, Bogdan Damoc, Aidan Clark, Diego De Las Casas, Aurelia Guy, Jacob Menick, Roman Ring, Tom Hennigan, Saffron Huang, Loren Maggiore, Chris Jones, Albin Cassirer, Andy Brock, Michela Paganini, Geoffrey Irving, Oriol Vinyals, Simon Osindero, Karen Simonyan, Jack Rae, Erich Elsen, and Laurent Sifre. Improving language models by retrieving from trillions of tokens. In Kamalika Chaudhuri, Stefanie Jegelka, Le Song, Csaba Szepesvari, Gang Niu, and Sivan Sabato, editors, Proceedings of the 39th International Conference on Machine Learning, volume 162 of Proceedings of Machine Learning Research, pages 2206-2240. PMLR, 17-23 Jul 2022. 4
[9] Nicolas Carion, Francisco Massa, Gabriel Synnaeve, Nicolas Usunier, Alexander Kirillov, and Sergey Zagoruyko. End-to-end object detection with transformers.
In European conference on computer vision, pages 213-229. Springer, 2020. 4
[10] Arslan Chaudhry, Marc'Aurelio Ranzato, Marcus Rohrbach, and Mohamed Elhoseiny. Efficient lifelong learning with A-GEM. In International Conference on Learning Representations, 2019. 1, 2
[11] Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey Hinton. A simple framework for contrastive learning of visual representations. In International conference on machine learning, pages 1597-1607. PMLR, 2020. 4
[12] Jan K Chorowski, Dzmitry Bahdanau, Dmitriy Serdyuk, Kyunghyun Cho, and Yoshua Bengio. Attention-based models for speech recognition. Advances in neural information processing systems, 28, 2015. 4
[13] M. Cimpoi, S. Maji, I. Kokkinos, S. Mohamed, and A. Vedaldi. Describing textures in the wild. In Proceedings of the IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), 2014. 6
[14] John D Cook. Upper and lower bounds for the normal distribution function. John D Cook's Blog. 11
[15] Aditya Deshpande, Alessandro Achille, Avinash Ravichandran, Hao Li, Luca Zancato, Charless Fowlkes, Rahul Bhotika, Stefano Soatto, and Pietro Perona. A linearized framework and a new benchmark for model selection for fine-tuning. arXiv preprint arXiv:2102.00084, 2021. 4
[16] Chaitanya Devaguptapu, Samarth Sinha, K J Joseph, Vineeth N Balasubramanian, and Animesh Garg. $\delta$-patching: A framework for rapid adaptation of pre-trained convolutional networks without base performance loss, 2023. 4
[17] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: pre-training of deep bidirectional transformers for language understanding. In Jill Burstein, Christy Doran, and Thamar Solorio, editors, Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019, Minneapolis, MN, USA, June 2-7, 2019, Volume 1 (Long and Short Papers), pages 4171-4186.
Association for Computational Linguistics, 2019. 4
[18] Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, and Neil Houlsby. An image is worth 16x16 words: Transformers for image recognition at scale. In International Conference on Learning Representations, 2021. 3, 7, 8
[19] Chelsea Finn, Pieter Abbeel, and Sergey Levine. Model-agnostic meta-learning for fast adaptation of deep networks. In International conference on machine learning, pages 1126-1135. PMLR, 2017. 4
[20] Golnaz Ghiasi, Tsung-Yi Lin, and Quoc V Le. Nas-fpn: Learning scalable feature pyramid architecture for object detection. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 7036-7045, 2019. 4
[21] Raphael Gontijo-Lopes, Yann Dauphin, and Ekin Dogus Cubuk. No one representation to rule them all: Overlapping features of training methods. In International Conference on Learning Representations, 2022. 4, 11
[22] Yunhui Guo, Honghui Shi, Abhishek Kumar, Kristen Grauman, Tajana Rosing, and Rogerio Feris. Spottune: transfer learning through adaptive fine-tuning. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 4805-4814, 2019. 4, 9
[23] Bharath Hariharan, Pablo Arbeláez, Ross Girshick, and Jitendra Malik. Hypercolumns for object segmentation and fine-grained localization. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 447-456, 2015. 4
[24] Junxian He, Chunting Zhou, Xuezhe Ma, Taylor Berg-Kirkpatrick, and Graham Neubig. Towards a unified view of parameter-efficient transfer learning. In International Conference on Learning Representations, 2022. 4, 1
[25] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition.
In 2016 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2016, Las Vegas, NV, USA, June 27-30, 2016, pages 770-778. IEEE Computer Society, 2016. 8
[26] Patrick Helber, Benjamin Bischke, Andreas Dengel, and Damian Borth. Eurosat: A novel dataset and deep learning benchmark for land use and land cover classification. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 2019. 6
[27] Neil Houlsby, Andrei Giurgiu, Stanislaw Jastrzebski, Bruna Morrone, Quentin De Laroussilhe, Andrea Gesmundo, Mona Attariyan, and Sylvain Gelly. Parameter-efficient transfer learning for nlp. In International Conference on Machine Learning, pages 2790-2799. PMLR, 2019. 3, 4
[28] Edward J Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen. LoRA: Low-rank adaptation of large language models. In International Conference on Learning Representations, 2022. 3, 4, 6, 7, 8, 13
[29] Vladimir Iglovikov and Alexey Shvets. Ternausnet: U-net with vgg11 encoder pre-trained on ImageNet for image segmentation. arXiv preprint arXiv:1801.05746, 2018. 4
[30] Gabriel Ilharco, Mitchell Wortsman, Samir Yitzhak Gadre, Shuran Song, Hannaneh Hajishirzi, Simon Kornblith, Ali Farhadi, and Ludwig Schmidt. Patching open-vocabulary models by interpolating weights, 2022. 4
[31] Gabriel Ilharco, Mitchell Wortsman, Ross Wightman, Cade Gordon, Nicholas Carlini, Rohan Taori, Achal Dave, Vaishaal Shankar, Hongseok Namkoong, John Miller, Hannaneh Hajishirzi, Ali Farhadi, and Ludwig Schmidt. Openclip, July 2021. 7, 8
[32] Andrew Jaegle, Felix Gimeno, Andy Brock, Oriol Vinyals, Andrew Zisserman, and Joao Carreira. Perceiver: General perception with iterative attention. In International conference on machine learning, pages 4651-4664. PMLR, 2021. 4
[33] Chao Jia, Yinfei Yang, Ye Xia, Yi-Ting Chen, Zarana Parekh, Hieu Pham, Quoc Le, Yun-Hsuan Sung, Zhen Li, and Tom Duerig.
Scaling up visual and vision-language representation learning with noisy text supervision. In International Conference on Machine Learning, pages 4904-4916. PMLR, 2021. 5
[34] Menglin Jia, Luming Tang, Bor-Chun Chen, Claire Cardie, Serge Belongie, Bharath Hariharan, and Ser-Nam Lim. Visual prompt tuning. In European Conference on Computer Vision (ECCV), 2022. 3, 4, 6, 7, 8, 5
[35] Aditya Khosla, Nityananda Jayadevaprakash, Bangpeng Yao, and Li Fei-Fei. Novel dataset for fine-grained image categorization. In First Workshop on Fine-Grained Visual Categorization, IEEE Conference on Computer Vision and Pattern Recognition, Colorado Springs, CO, June 2011. 6
[36] James Kirkpatrick, Razvan Pascanu, Neil Rabinowitz, Joel Veness, Guillaume Desjardins, Andrei A Rusu, Kieran Milan, John Quan, Tiago Ramalho, Agnieszka Grabska-Barwinska, et al. Overcoming catastrophic forgetting in neural networks. Proceedings of the national academy of sciences, 114(13):3521-3526, 2017. 2
[37] Alexander Kolesnikov, Lucas Beyer, Xiaohua Zhai, Joan Puigcerver, Jessica Yung, Sylvain Gelly, and Neil Houlsby. Big transfer (bit): General visual representation learning. In European conference on computer vision, pages 491-507. Springer, 2020. 3
[38] Simon Kornblith, Jonathon Shlens, and Quoc V Le. Do better imagenet models transfer better? In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 2661-2671, 2019. 4
[39] Zhi Kou, Kaichao You, Mingsheng Long, and Jianmin Wang. Stochastic normalization. In H. Larochelle, M. Ranzato, R. Hadsell, M.F. Balcan, and H. Lin, editors, Advances in Neural Information Processing Systems, volume 33, pages 16304-16314. Curran Associates, Inc., 2020. 4
[40] Jonathan Krause, Michael Stark, Jia Deng, and Li Fei-Fei. 3d object representations for fine-grained categorization. In 4th International IEEE Workshop on 3D Representation and Recognition (3dRR-13), Sydney, Australia, 2013.
6
[41] Juho Lee, Yoonho Lee, Jungtaek Kim, Adam Kosiorek, Seungjin Choi, and Yee Whye Teh. Set transformer: A framework for attention-based permutation-invariant neural networks. In International conference on machine learning, pages 3744-3753. PMLR, 2019. 4
[42] Kwonjoon Lee, Subhransu Maji, Avinash Ravichandran, and Stefano Soatto. Meta-learning with differentiable convex optimization. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 10657-10665, 2019. 4
[43] Brian Lester, Rami Al-Rfou, and Noah Constant. The power of scale for parameter-efficient prompt tuning. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 3045-3059, Online and Punta Cana, Dominican Republic, November 2021. Association for Computational Linguistics. 4, 8
[44] Junnan Li, Ramprasaath Selvaraju, Akhilesh Gotmare, Shafiq Joty, Caiming Xiong, and Steven Chu Hong Hoi. Align before fuse: Vision and language representation learning with momentum distillation. Advances in neural information processing systems, 34:9694-9705, 2021. 4, 7, 8
[45] Yanghao Li, Naiyan Wang, Jianping Shi, Jiaying Liu, and Xiaodi Hou. Revisiting batch normalization for practical domain adaptation. In 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Workshop Track Proceedings. OpenReview.net, 2017. 3, 4, 6, 7, 8, 13
[46] Zhizhong Li and Derek Hoiem. Learning without forgetting. IEEE transactions on pattern analysis and machine intelligence, 40(12):2935-2947, 2017. 1, 2
[47] Tsung-Yi Lin, Piotr Dollár, Ross Girshick, Kaiming He, Bharath Hariharan, and Serge Belongie. Feature pyramid networks for object detection. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 2117-2125, 2017. 4
[48] Ze Liu, Yutong Lin, Yue Cao, Han Hu, Yixuan Wei, Zheng Zhang, Stephen Lin, and Baining Guo.
Swin transformer: Hierarchical vision transformer using shifted windows. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 10012-10022, 2021. 3, 7, 8
[49] Zhuang Liu, Hanzi Mao, Chao-Yuan Wu, Christoph Feichtenhofer, Trevor Darrell, and Saining Xie. A convnet for the 2020s. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2022, New Orleans, LA, USA, June 18-24, 2022, pages 11966-11976, 2022. 3, 4, 7, 8
[50] Ilya Loshchilov and Frank Hutter. Decoupled weight decay regularization. arXiv preprint arXiv:1711.05101, 2017. 12
[51] Ilya Loshchilov and Frank Hutter. SGDR: Stochastic gradient descent with warm restarts. In International Conference on Learning Representations, 2017. 12
[52] Ilya Loshchilov and Frank Hutter. Decoupled weight decay regularization. In International Conference on Learning Representations, 2019. 6
[53] Dhruv Mahajan, Ross Girshick, Vignesh Ramanathan, Kaiming He, Manohar Paluri, Yixuan Li, Ashwin Bharambe, and Laurens Van Der Maaten. Exploring the limits of weakly supervised pretraining. In Proceedings of the European conference on computer vision (ECCV), pages 181-196, 2018. 7
[54] Subhransu Maji, Esa Rahtu, Juho Kannala, Matthew Blaschko, and Andrea Vedaldi. Fine-grained visual classification of aircraft. arXiv preprint arXiv:1306.5151, 2013. 6
[55] Arun Mallya, Dillon Davis, and Svetlana Lazebnik. Piggyback: Adapting a single network to multiple tasks by learning to mask weights. In Proceedings of the European Conference on Computer Vision (ECCV), pages 67-82, 2018. 8
[56] Maria-Elena Nilsback and Andrew Zisserman. Automated flower classification over a large number of classes. In Indian Conference on Computer Vision, Graphics and Image Processing, Dec 2008. 6
[57] New York Botanical Garden (NYBG). Fgvc 7 - herbarium 2020, 2020. 6
[58] Michal Pandy, Andrea Agostinelli, Jasper Uijlings, Vittorio Ferrari, and Thomas Mensink.
Transferability estimation using bhattacharyya class separability. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 9172-9182, June 2022. 4
[59] Omkar M. Parkhi, Andrea Vedaldi, Andrew Zisserman, and C. V. Jawahar. Cats and dogs. In IEEE Conference on Computer Vision and Pattern Recognition, 2012. 6
[60] Ariadna Quattoni and Antonio Torralba. Recognizing indoor scenes. In 2009 IEEE conference on computer vision and pattern recognition, pages 413-420. IEEE, 2009. 6
[61] Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. In International Conference on Machine Learning, pages 8748-8763. PMLR, 2021. 4, 7, 8
[62] Alec Radford, Karthik Narasimhan, Tim Salimans, Ilya Sutskever, et al. Improving language understanding by generative pre-training. 2018. 12
[63] Samyam Rajbhandari, Jeff Rasley, Olatunji Ruwase, and Yuxiong He. Zero: Memory optimizations toward training trillion parameter models. In SC20: International Conference for High Performance Computing, Networking, Storage and Analysis, pages 1-16. IEEE, 2020. 4
[64] Babak Saleh and Ahmed Elgammal. Large-scale classification of fine-art paintings: Learning the right metric on the right feature. International Journal for Digital Art History, (2), Oct. 2016. 6
[65] Mark Sandler, Andrey Zhmoginov, Max Vladymyrov, and Andrew Jackson. Fine-tuning image transformers using learnable memory. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 12155-12164, 2022. 4
[66] Zhiqiang Shen, Zechun Liu, Jie Qin, Marios Savvides, and Kwang-Ting Cheng. Partial is better than all: Revisiting fine-tuning strategy for few-shot learning. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, pages 9594-9602, 2021.
3
[67] Hugo Touvron, Matthieu Cord, and Hervé Jégou. DeiT III: Revenge of the ViT. In Computer Vision - ECCV 2022: 17th European Conference, Tel Aviv, Israel, October 23-27, 2022, Proceedings, Part XXIV, pages 516-533, Berlin, Heidelberg, 2022. Springer-Verlag. 7, 8
[68] Cristina Vasconcelos, Vighnesh Birodkar, and Vincent Dumoulin. Proper reuse of image classification features improves object detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 13628-13637, 2022. 4
[69] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. Advances in neural information processing systems, 30, 2017. 4
[70] Matthew Wallingford, Hao Li, Alessandro Achille, Avinash Ravichandran, Charless Fowlkes, Rahul Bhotika, and Stefano Soatto. Task adaptive parameter sharing for multi-task learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 7561-7570, 2022. 4, 9
[71] Haohan Wang, Songwei Ge, Zachary Lipton, and Eric P Xing. Learning robust global representations by penalizing local predictive power. In Advances in Neural Information Processing Systems, pages 10506-10518, 2019. 6
[72] Zifeng Wang, Zizhao Zhang, Chen-Yu Lee, Han Zhang, Ruoxi Sun, Xiaoqi Ren, Guolong Su, Vincent Perot, Jennifer Dy, and Tomas Pfister. Learning to prompt for continual learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 139-149, 2022. 1, 2
[73] Peter Welinder, Steve Branson, Takeshi Mita, Catherine Wah, Florian Schroff, Serge Belongie, and Pietro Perona. Caltech-UCSD Birds 200. 2010. 6
[74] Saining Xie, Ross Girshick, Piotr Dollár, Zhuowen Tu, and Kaiming He. Aggregated residual transformations for deep neural networks. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 1492-1500, 2017.
3, 7, 8
[75] Li Yang, Adnan Siraj Rakin, and Deliang Fan. Rep-net: Efficient on-device learning via feature reprogramming. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 12277-12286, 2022. 4
[76] Kaichao You, Zhi Kou, Mingsheng Long, and Jianmin Wang. Co-tuning for transfer learning. Advances in Neural Information Processing Systems, 33:17236-17246, 2020. 4
[77] Jiahui Yu, Zirui Wang, Vijay Vasudevan, Legg Yeung, Mojtaba Seyedhosseini, and Yonghui Wu. Coca: Contrastive captioners are image-text foundation models. Transactions on Machine Learning Research, 2022. 4
[78] Matthew D Zeiler and Rob Fergus. Visualizing and understanding convolutional networks. In European conference on computer vision, pages 818-833. Springer, 2014. 4
[79] Jeffrey O Zhang, Alexander Sax, Amir Zamir, Leonidas Guibas, and Jitendra Malik. Side-tuning: a baseline for network adaptation via additive side networks. In Computer Vision - ECCV 2020: 16th European Conference, Glasgow, UK, August 23-28, 2020, Proceedings, Part III 16, pages 698-714. Springer, 2020. 4
[80] Xizhou Zhu, Weijie Su, Lewei Lu, Bin Li, Xiaogang Wang, and Jifeng Dai. Deformable DETR: Deformable transformers for end-to-end object detection. In International Conference on Learning Representations, 2021. 4

# Appendix

Below we provide additional details and results which are not presented in the main manuscript.

# A Continual Learning with Open-InCA

With the Open-InCA adapter, each class prediction is isolated using a dedicated query and classifier vector. For continual learning tasks, in addition to running multiple adapters in parallel as presented for the multi-task results in Table 4, Open-InCA enables an even more granular composition of adapter sub-tasks.
Recall the Open-InCA adapter architecture is defined as
+
+$$
+\left[ v_{\text{cross}}^{1}(\mathbf{z}), \dots, v_{\text{cross}}^{c}(\mathbf{z}) \right] := \operatorname{cross-attn}_{\theta}\left(\left[ z^{1}, \dots, z^{T} \right], \left[ q_{1}, \dots, q_{c} \right]\right)
+$$
+
+$$
+\operatorname{Open-InCA}_{\theta}(\mathbf{z}) := \operatorname{diag-head}_{\theta} \circ \mathrm{LN}\left(\left[ v_{\text{cross}}^{1}(\mathbf{z}), \dots, v_{\text{cross}}^{c}(\mathbf{z}) \right]\right)
+$$
+
+with LN denoting LayerNorm. Due to the properties of each operator, each class prediction can be computed separately as
+
+$$
+\operatorname{Open-InCA}(\mathbf{z})_{i} = \langle W_{i}, \operatorname{LN}(\operatorname{cross-attn}_{\theta}([z^{1}, \dots, z^{T}], [q_{i}])) \rangle.
+$$
+
+Because of this property we can remove a class prediction or add a new class prediction without affecting any other model prediction (as long as the parameters of cross-attn and LN remain fixed). As presented in Sec. 4, we use "query-only-training", which trains new adapter classes while freezing cross-attn and LN, enabling compatibility between task predictions.
+
+When training with "query-only-training", the softmax function, $\mathrm{softmax}(u)^k = \frac{\exp(u^k)}{\sum_{i=1}^{c}\exp(u^i)}$, indirectly injects information about the predictions of all classes due to the normalization term in the denominator, which means gradients for a particular class $i$ will include information from the other classes $j$. Instead, we can achieve complete training separation by using a sigmoid final activation, $\sigma(u) = \frac{\exp(u)}{\exp(u) + 1}$, and a Binary Cross Entropy (BCE) loss that considers each prediction separately. Naturally, in "query-only-training" the adapter representation capacity is reduced, since the cross-attention weights are not trained.
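The per-class isolation property above can be sketched in PyTorch (a minimal, hypothetical implementation: single-head attention, random initialization, and all dimensions are illustrative; diag-head is realized as one dot product per class):

```python
import torch
import torch.nn as nn

class OpenInCA(nn.Module):
    """Minimal Open-InCA sketch: one dedicated query q_i and classifier W_i per class."""

    def __init__(self, d: int, n_classes: int):
        super().__init__()
        # Shared (and, under "query-only-training", frozen) cross-attn and LN.
        self.attn = nn.MultiheadAttention(d, num_heads=1, batch_first=True)
        self.ln = nn.LayerNorm(d)
        self.queries = nn.ParameterList(
            [nn.Parameter(torch.randn(1, 1, d)) for _ in range(n_classes)])
        self.classifiers = nn.ParameterList(
            [nn.Parameter(torch.randn(d)) for _ in range(n_classes)])

    def predict_class(self, z: torch.Tensor, i: int) -> torch.Tensor:
        # z: (B, T, d) frozen backbone tokens [z^1, ..., z^T].
        q = self.queries[i].expand(z.shape[0], -1, -1)
        v, _ = self.attn(q, z, z)                       # cross-attn([z^1..z^T], [q_i])
        return self.ln(v[:, 0]) @ self.classifiers[i]   # <W_i, LN(...)>

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        # Each logit is computed in isolation, so classes can be added/removed freely.
        return torch.stack(
            [self.predict_class(z, i) for i in range(len(self.queries))], dim=-1)

    def add_class(self, d: int) -> None:
        # New (q_i, W_i) pair; existing logits are untouched while attn/LN stay frozen.
        self.queries.append(nn.Parameter(torch.randn(1, 1, d)))
        self.classifiers.append(nn.Parameter(torch.randn(d)))
```

Because every logit depends only on its own $(q_i, W_i)$, appending a class leaves all previous predictions identical, which is the no-forgetting property exploited in the continual-learning experiments.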
We present an experiment evaluating the performance of InCA, Open-InCA and "query-only-training" Open-InCA in Table 6 and observe that despite the isolated and reduced parameter set in query-only-training of Open-InCA, the method is still competitive and outperforms Linear Probing on most datasets.
+
+Next we test Open-InCA for class-incremental learning, for which we consider the Split CIFAR-100 incremental learning benchmark. Split CIFAR-100 is trained with 10 incremental learning episodes, each introducing 10 new classes. As in [10], we report the average episode accuracy and forgetting of "query-only-training" Open-InCA and additional baselines.
+
+In particular we evaluate Open-InCA using a ViT-B/16 along with the state-of-the-art methods L2P [72], LwF [46] and EWC [24]. Note that we do not apply any special routing of our learned episodic models and simply combine their predictions. In contrast, L2P is a prompt-based approach that, during inference, passes each new sample to an auxiliary classifier to predict its corresponding episode (in this case a 10-way classifier), and the corresponding episode model is up-weighted according to the prediction. We believe that with such an auxiliary classifier Open-InCA performance could improve significantly; nonetheless, we observe that Open-InCA can simply leverage a larger model efficiently to achieve state-of-the-art accuracy. We leave routing of samples to different learned sub-models as an interesting avenue for future work.
+
+In addition, Open-InCA has further benefits compared to typical class-incremental learning approaches:
+
+- Flexible incrementation With Open-InCA, different episodes can naturally contain a variable number of classes, and episodes can be further decomposed if needed. This is because one can modify the model at the granularity of a single-class predictor via the Open-InCA adapter architecture by introducing or removing additional $q_{i}$ and $W_{i}$.
+- Reduced forgetting risk With Open-InCA, the ability to add new classes without forgetting is built into the architecture, as the isolation of different class predictions ensures that previous class predictions remain the same (i.e., no logit regression), which reduces catastrophic forgetting.
+- Parameter and computation efficient The Open-InCA adapter benefits from the InCA approach, which is parameter efficient and computationally efficient during inference (see Table 4) as well as during training (see Fig. 6 for a comparison with prompts).
| Method | Average Accuracy (↑) | Forgetting (↓) |
| --- | --- | --- |
| LP-sequential* | 17.7 | 59.1 |
| Full-FT-sequential* | 33.6 | 86.9 |
| EWC [36] | 47.0 | 33.3 |
| LwF [46] | 60.7 | 27.8 |
| L2P [72] | 83.8 | 7.6 |
| Open-InCA (ViT-B/16) | 83.0 | 9.1 |
| Open-InCA (ViT-L/16) | 88.3 | 7.1 |
| Open-InCA (ViT-H/14) | 86.1 | 8.2 |
+
+Table 5: CIFAR-100 Class-Incremental Learning Split CIFAR-100 is trained with 10 episodes of 10 classes in the standard CIL evaluation suite [72]. Average accuracy and forgetting are reported over the 10 episodes in accordance with [10]. *Sequential fine-tuning results are taken from [72].
Top-1 Test Error:

| Dataset | InCA | Open-InCA | Query-only Open-InCA | In. LP |
| --- | --- | --- | --- | --- |
| CUB-200 | 9.1 | 9.5 | 12.1 | 16.2 |
| DTD | 17.8 | 17.1 | 19.2 | 18.9 |
| Aircrafts | 15.8 | 18.1 | 38.6 | 50.6 |
| MIT-67 | 10.1 | 9.4 | 9.1 | 9.7 |
| Oxford Flowers | 0.3 | 0.4 | 0.4 | 0.6 |
| Oxford Pets | 4.7 | 4.0 | 5.4 | 6.1 |
| Stanf. Cars | 8.4 | 8.4 | 22.8 | 29.2 |
| Stanf. Dogs | 8.1 | 5.7 | 5.3 | 5.3 |
| Average | 9.3 | 9.1 | 14.1 | 17.1 |
+
+Table 6: Open-InCA adapter performance We compare InCA, Open-InCA, "query-only-training" Open-InCA and Intermediate Linear Probing (In. LP). We observe that Open-InCA is comparable with InCA and that "query-only-training" significantly outperforms In. LP.
+
+# B Intermediate Representation Signatures
+
+The parallel training of InCA results in the synthesis of tens of models that can run inference with insignificant per-adapter marginal costs. As a result we have the ability to glean highly useful information about the network's different representations and study the network's inner representations effectively. This is especially important for recent non-convolutional architectures that do not have as many inductive biases explaining their behavior. In this section we present results showing the information we retrieve from the performance of InCA adapters.
+
+![](images/729c1999f19940f1e3c1d03f7fcb44a63786d9269f67ff1912a0ae3eeea84c85.jpg)
+Figure 4: (repeated) Partial fine-tuning vs. InCA Vertical dashed lines indicate the top InCA layer; curves show final test accuracy for different partial tuning training runs. Each mark indicates a run where all of the pre-trained model parameters are trained up to a "freeze point" in the network's layers. Note partial tuning performance saturates in close proximity to the optimal InCA adapter layer. This is aligned with our hypothesis that full fine-tuning attempts to surface existing representations already in the network. In that case, performance improves until the tuning approach unlocks the capacity to utilize an existing relevant representation and plateaus afterwards.
+
+# B.1 Partial fine-tuning and adapter performance
+
+Below we present in detail the experiments discussed in Sec. 5, in particular regarding partial fine-tuning and InCA. The experiments illustrate the relationship between InCA adapters at different layers and partial tuning.
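The partial-tuning protocol in these experiments freezes every layer before a chosen "freezing point" and trains the rest. A minimal PyTorch sketch (the 4-layer `Sequential` is a stand-in for the pre-trained backbone; the layer count is illustrative):

```python
import torch.nn as nn

def set_freezing_point(backbone: nn.Sequential, m: int) -> None:
    """Freeze layers g_1..g_{m-1} (1-indexed); g_m..g_l keep receiving gradients."""
    for idx, layer in enumerate(backbone, start=1):
        for p in layer.parameters():
            p.requires_grad = idx >= m

# Stand-in for a pre-trained network f = g_1 ∘ ... ∘ g_4 plus a prediction head.
backbone = nn.Sequential(*[nn.Linear(16, 16) for _ in range(4)])
head = nn.Linear(16, 10)  # the head is always trained

set_freezing_point(backbone, 3)  # one "run": train only g_3, g_4 (and the head)
trainable = [name for name, p in backbone.named_parameters() if p.requires_grad]
```

Sweeping `m` over the layer indices and recording the final test accuracy of each run produces a partial-tuning curve like those in Figure 4.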
We tune the pre-trained model starting from different "freezing points". In particular, for a neural network $f(x) = g_{1} \circ \ldots \circ g_{l}$, for each freezing point $g_{m}$ we consider its position $m$ and freeze all of the preceding layers $g_{1}, \ldots, g_{m-1}$ (i.e., no gradient updates for those layers). Back-propagation is then only applied to optimize $g_{m}, \ldots, g_{l}$, including the network's prediction "head". In Figure 4 we show the dynamics of partial tuning, where we optimize the pre-trained network (ViT-L/16) in different runs, each with a different freezing point. We compile the final Top-1 test accuracy of each freezing run to create a partial tuning "curve" for a single dataset. We compare the partial tuning performance curve of each dataset with the corresponding top layer of the InCA adapter trained on that dataset and observe that they are highly aligned: datasets that prefer later InCA layers also plateau in their test accuracy at later freezing points. In particular, we observe that partial tuning performance plateaus roughly at the same layer where InCA identifies the top adapter representation. This is also the point at which partial fine-tuning becomes capable of harnessing that representation for the downstream task. Overall this gives further evidence that "your representations are in the network" and fine-tuning simply surfaces existing representations that are already identified by InCA. When drawing the vertical lines of the top InCA adapters, we refer to output layers, e.g., the adapter at block 19 means the adapter corresponding to the final output of block 19, or the first input of block 20.
+
+# B.2 Task Layer Affinities
+
+In InCA we select top-performing adapters that "listen" to different intermediate representations of a neural network.
In our work we observe that one is able to achieve strong and diverse transfer learning by utilizing intermediate representations, and that for challenging tasks it is often necessary to use intermediate representations to achieve top results. Indeed, the best representation layer for an adapter tends to be highly robust to the hyper-parameters of the optimization. Even more intriguingly, we find that this representation affinity is preserved across different pre-trainings and even architectures. That is, certain tasks have a strong "affinity" to a certain range of representation layers even under different architectural circumstances. The majority of the architectures we consider have some pre-trained component on one of the ImageNet datasets (aside from the CLIP ViT-L/14 model). At the same time, the fact that different architectures give rise to similarly helpful representations gives strong clues about the relative effect of the architecture versus the pre-training task when learning over a large, diverse dataset such as ImageNet.
+
+![](images/f6e56288200acfe9147ac0b66e7a59936bc29d3e2b937b1bec72c547cb3a5605.jpg)
+Figure 5: Best-performing representation for the InCA adapter for Aircraft (top-left), MIT-67 (top-right) and Stanf. Cars (bottom).
+
+In detail, we look at the best-performing InCA adapter for a fixed task on different architectures. The pre-trained models we consider consist of 2 different pre-trainings of the ViT-L/16 architecture (ViT-original and DeiT), the 384-resolution pre-training of DeiT with the resolution-adjusted ViT-L/16, CLIP's ViT-L/14 architecture, the SWIN-L architecture, and the convolution-based ConvNext-Base architecture. All of the vanilla ViTs we consider have 24 residual transformer blocks, so comparison between blocks is directly aligned.
SWIN-L and ConvNext follow a "stage" breakdown of blocks; namely, SWIN-L has a (2, 2, 18, 2) stage breakdown that conveniently also adds up to 24 blocks (hence aligned in the figure), and ConvNext follows a (3-3-26-3) + head stage block composition, which we rescale in the figures to fit the same 24-block range. In addition, in the plots we also present the test error gap of each architecture when the InCA adapter is applied to its final block representation. Tasks that prefer earlier layers, such as Stanf. Cars and Aircraft, have a large gap from the performance of the last-layer representation adaptation, and such later layers lead to sub-optimal results.
+
+We remark that InCA sheds light on the inner representations learned by neural networks, showing that in some respects performance is invariant to the architecture and depends more on the pre-training dataset. We find this an intriguing topic and leave it for further research.
+
+# C Efficiency Results
+
+InCA is highly efficient, especially for large models, owing to the isolated adapter architecture that does not modify the backbone. We delineate the efficiency aspects as follows:
+
+- Training memory efficiency The use of a frozen pre-trained model makes training much more efficient and scalable, since not all of the intermediate computations need to be stored, as is done in standard training or as required by methods that compute gradient information using inner layers of the network. As soon as any intermediate layer requires a gradient, all subsequent activations must be held in GPU memory after the forward pass. This means methods like LoRA, BitFit and VPT all require storing all of the activation maps for all of the layer operations in the network, since they update parameters based on gradients from the very early layers in the network.
+- Fast optimization Unlike typical parameter-efficient methods that insert some form of trainable parameters into the network, InCA adapters are trained with "direct gradient" information coming from an isolated loss. Essentially, each adapter corresponds to a very shallow neural network trained directly via back-propagation. This makes the training dynamics fast, as direct gradient information about the loss easily reaches all of the adapter parameters. On the other hand, to update inserted parameters in the backbone, the gradient information is indirect and needs to be back-propagated through the backbone, with the risk of information loss, making the optimization more challenging, as we and the authors of [34] observe regarding prompt tuning.
+- Efficient multi-task inference As we present in Table 4, the unchanged backbone execution enables efficient, parallel inference, as multiple tasks can be evaluated at once.
+
+# C.1 Computational Efficiency of InCA Compared with VPT
+
+In Table 7 we observe that InCA is an order of magnitude more efficient to train than VPT. For the results in the table we consider the VPT-Deep adaptation method, trained with 50 prompt tokens in each layer. We report calculated training times in GPU-hours on a standard Nvidia T4 GPU using a ViT-L/16 architecture, and accuracy numbers based on the datasets of Table 1 with the DeiT pre-training. For larger architectures such as ViT-H/14 ("ViT-Huge") the difference in training time is even more striking: while InCA maintains a good per-run training time of 2.5 GPU-hours, VPT-Deep requires a staggering 55.8 GPU-hours per run on a single GPU. On ViT-H/14 this is exacerbated, as we must reduce the batch size of VPT significantly to fit training on a commonplace single GPU (Nvidia T4). We measure in terms of training InCA and VPT-Deep for the same number of epochs. This comparison is, if anything, conservative, as InCA trains an order of magnitude faster on a per-epoch basis (see Figure 6).
| Method | Mean Test Err. | Max. Full-FT gap | Training time per run (GPU hrs.) | # Hparam. per dataset | Train time per dataset (GPU hrs.) |
| --- | --- | --- | --- | --- | --- |
| InCA | 10.2 | 2.4 | 2.0 | 2 (parallel) | 4.0 (2.4*) |
| VPT-Deep [34] | 12.3 | 6.8 | 5.8 | 24 | 139.6 |
+
+Table 7: Computation costs of adaptation We adapt ViT-L/16 to CUB-200 downstream classification with the same number of training epochs. We evaluate the training and computational costs of a single run and of training VPT-Deep and InCA for one training dataset. *Training with 2 learning rates in parallel decreases training time from 4.0 to 2.4 GPU-hours.
+
+We attribute the difference in training time of InCA to:
+
+1. InCA does not require back-propagation through the whole model, which alone gives a $\sim 50\%$ speed improvement.
+2. InCA is robust to hyper-parameters and we optimize it using just 2 learning rates, compared with the hyper-parameter set of VPT (in our experiments we use 24 hyper-parameter configurations per dataset, while using the full configuration presented in [33] takes even longer). In addition, with "one-to-many" training, we train the two hyper-parameter configurations of InCA in parallel and report the corresponding training time in Table 7, denoted by $(^{*})$ for parallel hyper-parameter training.
+3. InCA does not increase the number of propagated tokens in the transformer (e.g., in VPT with 100 propagated tokens the attention matrix more than doubles, from $\sim 4\times 10^4$ to $\sim 9\times 10^4$ entries).
+
+# C.2 Optimization dynamics of InCA and VPT
+
+In Fig. 6 we conduct an experiment where we train InCA and the state-of-the-art prompting method VPT-Deep [34] for different numbers of epochs and report the final test accuracy. We observe that InCA trains an order of magnitude faster than prompting and reaches within $95\%$ relative test accuracy after 3 training epochs.
+
+![](images/d9fc00b07cd072f889f7219c88acff41213fbb04497c1d78bfb4526085f70bf0.jpg)
+Figure 6: Optimization Speed for training InCA and prompt tuning (VPT-Deep) on the Aircrafts dataset. We train each method until completion with varying numbers of epochs and report the final test accuracy relative to 50-epoch training.
The shallow adapter architecture and direct gradient signal in InCA make the training of the adapter an order of magnitude faster (in terms of gradient updates) than prompt tuning approaches. Both methods use batch size 32 and take the same number of gradient steps in each corresponding run, under the optimal learning rate.
+
+# D Theoretical Analysis
+
+Empirically, we consistently observe that a cross-attention architecture, as opposed to a linear or MLP-3 architecture, enables InCA to better harness the existing model. We present a theoretical result asserting that using the cross-attention layer for aggregation, as opposed to linear averaging or even full concatenation followed by a large-dimensional linear layer, is capable of learning over a strictly broader set of data distributions.
+
+We give the precise statement in Theorem D.1 and intuitively argue that cross-attention with learned queries has the ability to sift through irrelevant pieces of the representation that may be at variable positions in different data samples.
+
+Recall that in the settings considered thus far, the extracted activation of an image data-point can be viewed as $\mathbf{x}_i\in \mathbb{R}^{d\times T}$, or $T$ tokens, e.g. $\mathbf{x}_i = [x_i^1,x_i^2,\dots,x_i^T]$, with $x_{i}^{j}\in \mathbb{R}^{d}$. We argue that in many scenarios, task-pertinent information is a property of individual tokens (e.g. $x_{i}^{j}$) within a data-point $\mathbf{x}_i$ and not a property of the overall feature map. We present the theorem below. To this end we define a Token-Separability (TS) notion of a dataset.
+
+Definition 1 (Token-separable Dataset).
A dataset $\mathcal{D} = \{(\mathbf{x}_1, y_1), \ldots, (\mathbf{x}_n, y_n)\}$ with $\mathbf{x}_i = [x_i^1, x_i^2, \ldots, x_i^T] \in \mathbb{R}^{d \times T}$ and $y_i \in \{-1, 1\}$ is said to be linearly-token-separable if there exist a scalar $c > 0$, a bias $b \in \mathbb{R}$, and $w \in \mathbb{R}^d$ satisfying $\| w \|_2 = 1$, such that for each data point $(\mathbf{x}_i, y_i) \in \mathcal{D}$ there exists a token $x_i^{j_i} \in \mathbf{x}_i$ with
+
+$$
+y _ {i} \left(\left\langle x _ {i} ^ {j _ {i}}, w \right\rangle + b\right) \geq c. \tag {1}
+$$
+
+We define $(w_{\mathcal{D}},b_{\mathcal{D}})$ and $c_{\mathcal{D}}$ as the maximum-margin solution and maximum margin respectively, i.e. $c_{\mathcal{D}} = \max_{\{\| w\|_2 = 1,\, b\in \mathbb{R}\}}\min_{i\in[n]}\{y_{i}(\langle x_{i}^{j_{i}},w\rangle +b)\}$, with $(w_{\mathcal{D}},b_{\mathcal{D}})$ the maximizing pair.
+
+Intuitively, $\mathcal{D}$ is a TS-dataset if each of its data points contains a token that leads to linear separability (with the same $w$ shared among all points $\mathbf{x}_i\in \mathcal{D}$). One can further distinguish between aligned-TS datasets, where the index $j_{i}$ of the linearly separating token is consistent among the $n$ data points, and permutable-TS datasets, where $j_i$ depends on $i$. Further, TS datasets can be generalized to $k$-token-separable datasets, where $k$ tokens are responsible for separability in each $\mathbf{x}_i$; for this theoretical contribution we make no assumption on whether the dataset is aligned or permuted, but consider the setting of Definition 1 (i.e. not the $k$-separable generalization). We present an analytical statement for the advantage of cross-attn. The theorem is provided for binary classification via a scalar prediction, but extends conventionally to $C$-class classification.
For binary classification we define a prediction via the standard scalar binary aggregator $\sigma (u) = \mathrm{sign}(\sum_l u_l)$ that converts a vector into a binary prediction.
+
+Theorem D.1. Let $\mathcal{D}$ be a binary-class, token-separable dataset of $n$ data points with max-margin $c_{\mathcal{D}}$ and max-margin solution $(w_{\mathcal{D}}, b_{\mathcal{D}})$. Suppose that $\mathcal{D}$ is distributed such that for each $(\mathbf{x}_i, y_i) \in \mathcal{D}$ with $\mathbf{x}_i = (x_i^1, \ldots, x_i^T)$, the separating token $x_i^{j_i}$ is normalized and the rest of the tokens correspond to "noise", $x_i^k \sim N(0, I / d)$, so that $\mathbb{E}\| x_i^k\|_2^2 = 1$. Furthermore assume
+
+$$
+c _ {\mathcal {D}} \geq \max \left(\sqrt {\frac {3 2}{d} \left(\log (1 / \delta) + \log (2 n T)\right)}, 2 | b _ {\mathcal {D}} |\right). \tag {2}
+$$
+
+Then there exists a cross-attention classifier
+
+$$
+f (\mathbf {x}; q, \{\mathbf{W} \}, b) = \sigma \left(\sum_ {l = 1} ^ {d} \operatorname{cross-attn} (\mathbf {x}, q) _ {l} + b\right) \tag {3}
+$$
+
+that separates $\mathcal{D}$ with probability at least $1 - \delta$. In contrast, every fixed member $g(\mathbf{x};w,b) = \sigma\left(\sum_{j=1}^{T} \langle w, x^{j}\rangle + b\right)$ of the linear classifier family will fail to separate $\mathcal{D}$ with probability at least $\frac{1}{\sqrt{2\pi}} \frac{s}{s^2 + 1} \exp(-s^2 / 2)$, where $s = \frac{\sqrt{d}}{\sqrt{T-1}} c_{\mathcal{D}}$.
+
+As stated above, the failure probability of the simple linear classifier $g$ depends on $s$, which scales as $s \sim \sqrt{d / T}$. For existing architectures $d$ and $T$ tend to have similar orders of magnitude, e.g. for ViT-B/16, $d = 768$, $T = 196$, which makes the failure probability non-negligible. Before presenting the proof, we make the following observation: in InCA, we use the same cross-attn layer with latent $q$, which we show can be simplified via reparameterization.
+
+Observation D.2 (Query Collapse Reparameterization). A single-head cross-attention parameterization with latent $[q]$ is equivalent to the following simplified layer: $\operatorname{cross-attn}(\mathbf{x}, q) = \sum_{j=1}^{T} \operatorname{softmax}(\langle x^j, q^*\rangle) \cdot \mathbf{W}_v x^j$ with $q^* \in \mathbb{R}^d$.
+
+This can be derived by decomposing the attention score, which is the input to the softmax:
+
+$$
+a _ {j} = \left\langle \mathbf {W} _ {q} q, \mathbf {W} _ {k} x ^ {j} \right\rangle = \left(\mathbf {W} _ {q} q\right) ^ {\top} \left(\mathbf {W} _ {k} x ^ {j}\right) = q ^ {\top} \mathbf {W} _ {q} ^ {\top} \mathbf {W} _ {k} x ^ {j} = (q ^ {*}) ^ {\top} x ^ {j},
+$$
+
+where $q^{*} = \mathbf{W}_{k}^{\top}\mathbf{W}_{q} q \in \mathbb{R}^{d}$; hence the cross-attn layer simplifies, which is used in the proof.
+
+As our proof shows, the cross-attn layer can operate on a large data bandwidth, e.g. $\mathbf{x} \in \mathbb{R}^{d \times T}$, while still being selective in finding task-specific representations. Empirically we also observe that increasing the number of heads of cross-attn further improves the performance of InCA. This is in part because it enables the learned latent query parameter $q$ to identify more useful token patterns, and since $q$ is fixed, using more heads remains stable (as opposed to when $q$ is a data input). We now present the proof of the claim.
+
+# Proof of Theorem D.1
+
+Proof. The proof of the theorem has two parts: A) the positive result for the cross-attn layer, and B) the negative result for the linear layer (a lower bound on the probability of non-separability). We start with A) and consider separability of positive and negative data samples in turn. First we simplify and write an equivalent binary classifier expression for the cross-attn classifier.
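Observation D.2's query-collapse identity can also be checked numerically (a quick sketch with random matrices; dimensions are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8
W_q, W_k = rng.normal(size=(d, d)), rng.normal(size=(d, d))
q = rng.normal(size=d)    # latent query
x_j = rng.normal(size=d)  # one token x^j

# Attention score computed the standard way: a_j = <W_q q, W_k x^j> ...
a_full = (W_q @ q) @ (W_k @ x_j)

# ... and via the collapsed query q* = W_k^T W_q q, so that a_j = <q*, x^j>.
q_star = W_k.T @ (W_q @ q)
a_collapsed = q_star @ x_j

assert np.isclose(a_full, a_collapsed)
```

The collapse removes $\mathbf{W}_q$ and $\mathbf{W}_k$ from the analysis, leaving a single vector $q^*$ that scores tokens, which is the form used throughout the proof below.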
+
+Positive result for the cross-attention model We consider a "single-head" cross-attention layer, and by Observation D.2 we can write the cross-attn layer as follows:
+
+$$
+\operatorname{cross-attn} \left(\mathbf {x} _ {i}; q, \left\{\mathbf {W} \right\}\right) = \sum_ {j = 1} ^ {T} \operatorname{softmax} \left(\left\langle x _ {i} ^ {j}, q ^ {*} \right\rangle\right) \cdot \mathbf {W} _ {v} x _ {i} ^ {j}. \tag {4}
+$$
+
+Note that the softmax is a function of the entire vector $\{\langle x_i^j,q^*\rangle \}_{j\in [1,T]}$; however, we write it in the form above to illustrate the summed terms. For simplicity of notation, we drop the asterisk and write $q^{*}\in \mathbb{R}^{d}$ as $q$. Combining cross-attn with the binary aggregator, we aggregate over the output vector of the cross-attn layer:
+
+$$
+f (\mathbf {x} _ {i}; q, \{\mathbf{W}\}, b) = \sigma \Bigg(\sum_{l = 1}^{d}\big(\operatorname{cross-attn}(\mathbf{x}_{i};q,\{\mathbf{W}\})\big)_{l} + b\Bigg).
+$$
+
+Define $S(\mathbf{x}_i,q)\in \mathbb{R}^{1\times T}$ as the computed softmax vector,
+
+$$
+S \left(\mathbf {x} _ {i}, q\right) = S = \operatorname{softmax} \left(\left[ \left\langle x _ {i} ^ {1}, q \right\rangle , \dots , \left\langle x _ {i} ^ {T}, q \right\rangle \right]\right). \tag {5}
+$$
+
+Substituting into the classifier, we have
+
+$$
+\begin{array}{l} f (\mathbf {x} _ {i}; q, \{\mathbf {W} \}, b) = \sigma \left(\sum_ {l = 1} ^ {d} \left(\sum_ {j = 1} ^ {T} S _ {j} \cdot \mathbf {W} _ {v} x _ {i} ^ {j}\right) _ {l} + b\right) \\ = \sigma \left(\sum_ {j = 1} ^ {T} S _ {j} \sum_ {l = 1} ^ {d} \left(\mathbf {W} _ {v} x _ {i} ^ {j}\right) _ {l} + b\right). \\ \end{array}
+$$
+
+Let $u = \sum_{l=1}^{d}(\mathbf{W}_v)_{[l,:]}$ be the sum of the rows of $\mathbf{W}_v$. Then $\sum_{l=1}^{d}(\mathbf{W}_v x_i^j)_l = \langle u, x_i^j\rangle$, which gives
+
+$$
+f (\mathbf {x}; q, u, b) = \sigma \left(\sum_ {j = 1} ^ {T} S _ {j} \langle u, x _ {i} ^ {j} \rangle + b\right).
+$$
+
+Thus the cross-attn classifier presented is equivalent to the parameterization above. Next we consider the two factors in each summand, namely $S_{j}$ and $\langle u,x_i^j\rangle$. We will derive their distributions for a data point $(\mathbf{x}_i,y_i)$ with label $y_{i} = 1$ and with label $y_{i} = -1$ separately. We start with $y_{i} = 1$ and consider
+
+$$
+S _ {k} = \frac {\exp \left(\langle x _ {i} ^ {k} , q \rangle\right)}{\sum_ {j = 1} ^ {T} \exp \left(\langle x _ {i} ^ {j} , q \rangle\right)}. \tag {6}
+$$
+
+Take $j_{i}$ to be the separating token for sample $\mathbf{x}_i$. By the assumption of the theorem, for $k \neq j_{i}$ the tokens correspond to isotropic noise of expected squared norm 1, i.e. $x_i^k \sim N(0, I / d)$. For a fixed $u \in \mathbb{R}^d$ with $\| u \|_2 = 1$, we take $\eta_k$ to be the dot product
+
+$$
+\eta_ {k} = \langle u, x _ {i} ^ {k} \rangle = \sum_ {l} (u) _ {l} \cdot \left(x _ {i} ^ {k}\right) _ {l}. \tag {7}
+$$
+
+For $k \neq j_{i}$ this is a sum of independent Gaussians, with the $l$-th term distributed as $N(0, u_l^2 / d)$. As such we have
+
+$$
+\langle u, x _ {i} ^ {k} \rangle \sim N \left(0, \sum_ {l} \frac {u _ {l} ^ {2}}{d}\right) = N (0, \| u \| _ {2} ^ {2} / d) = N (0, 1 / d)
+$$
+
+since $\| u\| _2 = 1$. Next we consider $k = j_{i}$. By the hypothesis we have that
+
+$$
+y _ {i} \left(\left\langle x _ {i} ^ {j _ {i}}, w _ {\mathcal {D}} \right\rangle + b _ {\mathcal {D}}\right) \geq c _ {\mathcal {D}}.
+$$
+
+With positive label $(y_{i} = 1)$ this gives $\langle x_i^{j_i},w_{\mathcal{D}}\rangle +b_{\mathcal{D}}\geq c_{\mathcal{D}}$. Note that since $c_{\mathcal{D}}\geq 2|b_{\mathcal{D}}|$ we have $\langle x_i^{j_i},w_{\mathcal{D}}\rangle \geq c_{\mathcal{D}} / 2$.
Since $w_{\mathcal{D}}$ is the maximal-margin solution, we have $\| w_{\mathcal{D}}\| = 1$ and $c_{\mathcal{D}} > 0$. Take $q$ to be of the form $q = t\cdot w_{\mathcal{D}}$ for $t\in \mathbb{R}^+$ and $u = w_{\mathcal{D}}$; then
+
+$$
+t \cdot \eta_ {j _ {i}} = \left\langle x _ {i} ^ {j _ {i}}, q \right\rangle = t \left\langle x _ {i} ^ {j _ {i}}, w _ {\mathcal {D}} \right\rangle \geq t \cdot c _ {\mathcal {D}} / 2. \tag {8}
+$$
+
+For $\eta_{k}$, $k \neq j_{i}$, separating out $t$, we have $\langle x_i^k, q \rangle = t \cdot \eta_k$. Define $M = \max_{k \neq j_i}(|\eta_k|)$. $M$ is a random variable distributed as the maximum of $T - 1$ i.i.d. Gaussians, each distributed according to $N(0,1/d)$. We bound $M$ using a Gaussian tail bound. Recall that the moment generating function of a Gaussian random variable $X \sim N(0,1)$ is given by $M_X(s) = \mathbb{E}[e^{sX}] = e^{\frac{1}{2} s^2}$. Then note that for any $s > 0$ we have
+
+$$
+\mathbb {P} (X \geq r) = \mathbb {P} \left(e ^ {s X} \geq e ^ {s r}\right) \leq e ^ {- s r} M _ {X} (s) = e ^ {- s r + \frac {1}{2} s ^ {2}}
+$$
+
+where the inequality is an application of Markov's inequality. Setting $s = r$ gives the tail bound
+
+$$
+\mathbb {P} (X \geq r) \leq \exp (- r ^ {2} / 2). \tag {9}
+$$
+
+For our setting with $\eta_k\sim N(0,1 / d)$,
+
+$$
+\mathbb {P} \left(\eta_ {k} \geq r\right) \leq \exp \left(- d r ^ {2} / 2\right). \tag {10}
+$$
+
+For a two-sided bound, by symmetry of the distribution we have
+
+$$
+\mathbb {P} \left(\left| \eta_ {k} \right| \geq r\right) \leq 2 \exp \left(- d r ^ {2} / 2\right). \tag {11}
+$$
+
+Therefore a union bound results in
+
+$$
+\begin{array}{l} \mathbb {P} (M \geq r) = \mathbb {P} \left(\bigcup_ {k \neq j _ {i}} | \eta_ {k} | \geq r\right) \\ \leq \sum_ {k \neq j _ {i}} \mathbb {P} (| \eta_ {k} | \geq r) \\ \leq (T - 1) \cdot 2 \exp (- d r ^ {2} / 2) \\ \leq 2 T \exp (- d r ^ {2} / 2).
\\ \end{array}
+$$
+
+We can bound the bulk of the distribution of $M$ as
+
+$$
+\mathbb {P} (M < r) \geq 1 - 2 T \exp (- d r ^ {2} / 2). \tag {12}
+$$
+
+Taking $r = c_{\mathcal{D}} / 4$, with probability at least $1 - 2T\exp (-d(c_{\mathcal{D}} / 4)^2 /2) = 1 - 2T\exp (-d c_{\mathcal{D}}^2 /32)$ we have
+
+$$
+M = \max _ {k \neq j _ {i}} | \eta_ {k} | < \frac {c _ {\mathcal {D}}}{4} \tag {13}
+$$
+
+and thus
+
+$$
+\max _ {k \neq j _ {i}} (\langle q, x _ {i} ^ {k} \rangle) < \frac {t c _ {\mathcal {D}}}{4}. \tag {14}
+$$
+
+With high probability, Eq. (13) holds. This implies that for $j_{i}$,
+
+$$
+\begin{array}{l} S _ {j _ {i}} = \frac {\exp (\langle x _ {i} ^ {j _ {i}} , q \rangle)}{\sum_ {j = 1} ^ {T} \exp (\langle x _ {i} ^ {j} , q \rangle)} \\ = \frac {1}{1 + \sum_ {j \neq j _ {i}} ^ {T} \exp (\langle x _ {i} ^ {j} , q \rangle - \langle x _ {i} ^ {j _ {i}} , q \rangle)} \\ \geq \frac {1}{1 + \sum_ {j \neq j _ {i}} ^ {T} \exp (\langle x _ {i} ^ {j} , q \rangle - t c _ {\mathcal {D}} / 2)} \\ \geq \frac {1}{1 + \sum_ {j \neq j _ {i}} ^ {T} \exp (- t c _ {\mathcal {D}} / 4)} \\ = \frac {1}{1 + (T - 1) \exp (- t c _ {\mathcal {D}} / 4)} \\ = 1 - \frac {(T - 1) \exp (- t c _ {\mathcal {D}} / 4)}{1 + (T - 1) \exp (- t c _ {\mathcal {D}} / 4)}. \\ \end{array}
+$$
+
+Note that $c_{\mathcal{D}} > 0$ and $T$ is fixed, while the probability bound is independent of $t$, which may take arbitrarily large values; e.g. for any $\epsilon > 0$, take $t = (4 / c_{\mathcal{D}}) \log (T / \epsilon)$, which gives
+
+$$
+S _ {j _ {i}} \geq 1 - \epsilon . \tag {15}
+$$
+
+Since $S_{k}\geq 0$ for each $k$ and $\sum_{k = 1}^{T}S_{k} = 1$, we have for $k\neq j_{i}$
+
+$$
+S _ {k} \leq \sum_ {j \neq j _ {i}} S _ {j} = 1 - S _ {j _ {i}} \leq \epsilon .
\tag {16}
+$$
+
+We consider the classifier prediction
+
+$$
+f (\mathbf {x}; q, u, b) = \sigma \left(\sum_ {j = 1} ^ {T} S _ {j} \langle u, x _ {i} ^ {j} \rangle + b\right) \tag {17}
+$$
+
+where $b$ is a bias parameter we can choose. Recall $u = w_{\mathcal{D}}$. Focusing on the inside of the sign function,
+
+$$
+\begin{array}{l} \sum_ {j = 1} ^ {T} S _ {j} \langle w _ {\mathcal {D}}, x _ {i} ^ {j} \rangle = S _ {j _ {i}} \langle w _ {\mathcal {D}}, x _ {i} ^ {j _ {i}} \rangle + \sum_ {k \neq j _ {i}} S _ {k} \eta_ {k} \\ \geq S _ {j _ {i}} \left\langle w _ {\mathcal {D}}, x _ {i} ^ {j _ {i}} \right\rangle - \sum_ {k \neq j _ {i}} S _ {k} \max _ {j \neq j _ {i}} (| \eta_ {j} |) \\ \geq (1 - \epsilon) c _ {\mathcal {D}} / 2 - \epsilon \cdot c _ {\mathcal {D}} / 4 \\ = (1 - (3 / 2) \epsilon) c _ {\mathcal {D}} / 2 > \frac {c _ {\mathcal {D}}}{4} \\ \end{array}
+$$
+
+provided that $\epsilon < 1/3$. If we take $b = -\frac{c_{\mathcal{D}}}{4}$ we have that for $q = tw_{\mathcal{D}}$ and $u = w_{\mathcal{D}}$
+
+$$
+f \left(\mathbf {x} _ {i}; q, u, b\right) = \sigma \left(\sum_ {j = 1} ^ {T} S _ {j} \langle w _ {\mathcal {D}}, x _ {i} ^ {j} \rangle + b\right) = 1 = y _ {i}. \tag {18}
+$$
+
+Next we address the case where $y_{i} = -1$. We consider the classifier prediction
+
+$$
+f (\mathbf {x}; q, u, b) = \sigma \left(\sum_ {j = 1} ^ {T} S _ {j} \langle u, x _ {i} ^ {j} \rangle + b\right) \tag {19}
+$$
+
+with $u = w_{\mathcal{D}}$. Again for $k \neq j_i$ we have that
+
+$$
+\max _ {k \neq j _ {i}} \left(\left\langle u, x _ {i} ^ {k} \right\rangle\right) < c _ {\mathcal {D}} / 4.
\tag {20} +$$ + +On the other hand for $j_{i},y_{i} = -1$ we have that + +$$ +\begin{array}{l} y _ {i} \left(\left\langle x _ {i} ^ {j _ {i}}, w _ {\mathcal {D}} \right\rangle + b _ {\mathcal {D}}\right) \geq c _ {\mathcal {D}} \\ \Longrightarrow \langle x _ {i} ^ {j _ {i}}, w _ {\mathcal {D}} \rangle + b _ {\mathcal {D}} \leq - c _ {\mathcal {D}} \\ \Longrightarrow \langle x _ {i} ^ {j _ {i}}, w _ {\mathcal {D}} \rangle \leq - c _ {\mathcal {D}} / 2 < 0 \\ \end{array} +$$ + +where we have used the hypothesis that $c_{\mathcal{D}} \geq 2|b_{\mathcal{D}}|$ in the last line. We consider the term inside the classifier. We note that + +$$ +\begin{array}{l} \sum_ {k = 1} ^ {T} S _ {k} \langle u, x _ {i} ^ {k} \rangle < \sum_ {k \neq j _ {i}} ^ {T} S _ {k} \langle u, x _ {i} ^ {k} \rangle \\ < \sum_ {k \neq j _ {i}} ^ {T} S _ {k} \cdot (c _ {\mathcal {D}}) / 4 \\ \leq c _ {\mathcal {D}} / 4 \\ \end{array} +$$ + +where in the first inequality we have used the fact that $S_{j_i}\langle u,x_i^{j_i}\rangle < 0$ and in the last inequality we have used the fact that $\sum_{j = 1}^{T}S_{j} = 1$ . Therefore again with bias term $b = -c_{\mathcal{D}} / 4$ we have that + +$$ +\sum_ {j = 1} ^ {T} S _ {j} \left\langle u, x _ {i} ^ {j} \right\rangle + b < 0 \tag {21} +$$ + +and $f(\mathbf{x}_i; q, u, b) = -1 = y_i$ . Thus we have just shown that with probability $1 - 2T\exp(-d(c_{\mathcal{D}})^2/32)$ the model $f(x; q, u, b)$ with $u = w_{\mathcal{D}}, q = tw_{\mathcal{D}}$ , $b = -c_{\mathcal{D}}/4$ gives the correct label for $\mathbf{x}_i$ . Taking the union bound over all $n$ points in $\mathcal{D}$ we get with probability at least $1 - 2Tn\exp(-d(c_{\mathcal{D}})^2/32) \geq 1 - \delta$ the model $f(x; q, u, b)$ with $u = w_{\mathcal{D}}, q = tw_{\mathcal{D}}$ , $b = -c_{\mathcal{D}}/4$ separates $\mathcal{D}$ . 
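As a sanity check on the temperature choice $t = (4/c_{\mathcal{D}})\log(T/\epsilon)$, the following sketch evaluates the softmax mass on the separating token in the worst case permitted by Eqs. (13)–(14). The values of $c_{\mathcal{D}}$, $T$, and $\epsilon$ are arbitrary illustrative choices, not from the proof:

```python
import math

def softmax_top_mass(top_logit, other_logits):
    """Softmax probability assigned to the top-scoring token."""
    logits = [top_logit] + list(other_logits)
    mx = max(logits)
    exps = [math.exp(v - mx) for v in logits]
    return exps[0] / sum(exps)

# Illustrative values: margin c, sequence length T, target slack eps.
c, T, eps = 0.5, 64, 0.05
t = (4.0 / c) * math.log(T / eps)  # temperature choice from the proof

# Worst case allowed by Eqs. (13)-(14): the separating token scores exactly
# t*c/2 while every distractor scores t*c/4.
S_top = softmax_top_mass(t * c / 2, [t * c / 4] * (T - 1))
assert S_top >= 1 - eps  # Eq. (15)
```

Each distractor's softmax weight relative to the top token is $\exp(-tc_{\mathcal{D}}/4) = \epsilon/T$, so the top token retains mass at least $1/(1+\epsilon) \geq 1-\epsilon$, matching Eq. (15).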
**Negative result for the linear model** We consider the linear classifier

$$
g (\mathbf {x}; w, b) = \sigma \left(b + \sum_ {j = 1} ^ {T} \langle w, x ^ {j} \rangle\right) \tag {22}
$$

where $w \in \mathbb{R}^d$ is restricted to have unit norm $\| w \| = 1$. For an input $\mathbf{x}_i$ under the aggregation, the term inside the sign function simplifies to

$$
\sum_ {j = 1} ^ {T} \langle w, x _ {i} ^ {j} \rangle = \langle w, \sum_ {j = 1} ^ {T} x _ {i} ^ {j} \rangle .
$$

We recall that the $x_{i}^{k}$ for $k \neq j_{i}$ are distributed according to $N(0, \frac{1}{d} I)$. Thus we have that

$$
\alpha_ {i} := \sum_ {k \neq j _ {i}} x _ {i} ^ {k} \sim N \left(0, \frac {T - 1}{d} I\right). \tag {23}
$$

So the problem of classification is equivalent to learning a linear classifier over the separating tokens in the presence of Gaussian noise with per-coordinate variance $\frac{T - 1}{d}$. Let $i^{*}$ be the index corresponding to the input $\mathbf{x}_{i^{*}}$ with smallest margin, i.e.

$$
i ^ {*} = \operatorname {argmin} _ {i} y _ {i} (\langle w, x _ {i} ^ {j _ {i}} \rangle + b _ {\mathcal {D}}).
$$

Then we have that $y_{i^*}(\langle w, x_{i^*}^{j_{i^*}} \rangle + b_{\mathcal{D}}) \leq c_{\mathcal{D}}$. We note that any linear classifier $g(\mathbf{x}; w, b)$ with $\| w \|_2 = 1$ will fail to classify $\mathcal{D}$ whenever $y_{i^*} \langle w, \alpha_{i^*} \rangle < -c_{\mathcal{D}}$. Thus we will lower bound the probability $\mathbb{P}(y_{i^*} \langle w, \alpha_{i^*} \rangle < -c_{\mathcal{D}})$. Note for a standard Gaussian random variable $\eta \sim N(0, 1)$, as shown in [14], we have for $r > 0$

$$
\mathbb {P} (\eta > r) \geq \frac {1}{\sqrt {2 \pi}} \frac {r}{r ^ {2} + 1} \exp (- r ^ {2} / 2). \tag {24}
$$

Set $s = \frac{\sqrt{d}}{\sqrt{T - 1}} c_{\mathcal{D}}$.
Then by symmetry of the Gaussian distribution the above bound translates into the following bound for $\langle w, \alpha_{i^*} \rangle$:

$$
\mathbb {P} \left(y _ {i ^ {*}} \langle w, \alpha_ {i ^ {*}} \rangle < - c _ {\mathcal {D}}\right) \geq \frac {1}{\sqrt {2 \pi}} \frac {s}{s ^ {2} + 1} \exp \left(- s ^ {2} / 2\right).
$$

It follows that for any $w \in \mathbb{R}^d$ such that $\| w \|_2 = 1$ the linear classifier $g(\mathbf{x}; w, b)$ incorrectly classifies $\mathcal{D}$ with probability at least

$$
\frac {1}{\sqrt {2 \pi}} \frac {s}{s ^ {2} + 1} \exp (- s ^ {2} / 2).
$$

This completes the second part of the proof.

![](images/a00d10339e413115dac069156d8eedc4dbe60dc9bf380ed1319a828670562c05.jpg)

# E Further Results

We present additional experiments below. In Subsection E.1 we present per-dataset results for additional architectures, and a discussion of ensembling InCA is given in Subsection E.2.

# E.1 Per-dataset results for different architectures as presented in Table 2

Table 8 provides per-dataset results that are presented in aggregate in Table 2. Below we present the results for ConvNext-Base and ViT-L/16 (original pre-training) pre-trained models (with the results for ViT-L/16 DeiT and SWIN-L presented in Table 1 and Table 3 respectively).

# E.2 Ensembling learned adapters

Because of its "one-to-many" inference, InCA can take a set of independently learned adapters and ensemble them with only a marginal increase in inference cost. We follow non-parametric equal-weight ensembling, taking the output predictions of two adapters $h_1(x), h_2(x)$ on a sample image $x$. Note that the adapters are computed with their relevant representations via a single forward pass, which makes the cost of executing $h_1(x)$ and $h_2(x)$ together only incrementally higher than that of computing $h_1(x)$ alone. The ensemble is defined as

$$
h ^ {*} (x) = \frac {h _ {1} (x) + h _ {2} (x)}{2}.
\tag {25}
$$

Given the large combinatorial space of selecting $k$ adapters from the $l$ learned adapters, we consider only ensembles of two adapters. After training we evaluate all $l(l - 1) / 2$ such pairs and compare them with the top-performing single-layer predictor, which we present in Figure 7. In the figure, we illustrate the representations and corresponding adapter pairs that lead to the best performance, and also report the computed ensemble gain, which is the difference between the ensembled model's accuracy and the top accuracy of any single adapter.

In addition to improving classification accuracy, ensembling can aid in improving robustness and out-of-distribution performance, which we leave as future work. Further directions include ensembling adapters of different architectures (e.g., an MLP-3 ensembled with an InCA adapter) or adapters that use representations from different neural networks [21].

# E.3 Ablation on the number of queries

We run an ablation to see the effects of using a different number of queries in the InCA adapter architecture. In particular, the InCA adapter is written as

$$
v _ {\text {cross}} (\mathbf {z}) _ {[ 1: m ]} := \text {cross-attn} _ {\theta} ([ z ^ {1}, \dots , z ^ {T} ], [ q _ {1}, \dots , q _ {m} ])
$$

$$
\text {InCA} _ {\theta} (\mathbf {z}) := \text {head} _ {\theta} \circ \text {norm} \left(\text {avg-pool} \left(v _ {\text {cross}} (\mathbf {z}) _ {[ 1: m ]}\right)\right).
$$
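As an illustration of the adapter form above, the following is a minimal numpy sketch of a single-head cross-attention adapter with $m$ learned queries. The weights are random and the shapes are assumptions for illustration; it is not the paper's implementation (which may use multi-head attention and a specific normalization):

```python
import numpy as np

def softmax(a):
    # Numerically stable softmax over the last axis.
    e = np.exp(a - a.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def inca_adapter(Z, Q, Wk, Wv, Wh):
    """Z: (T, d) frozen token activations; Q: (m, d) learned queries.
    Cross-attend, avg-pool the m query outputs, normalize, apply a linear head."""
    K, V = Z @ Wk, Z @ Wv                        # keys/values from frozen tokens
    A = softmax(Q @ K.T / np.sqrt(K.shape[1]))   # (m, T) attention weights
    pooled = (A @ V).mean(axis=0)                # avg-pool over the m query outputs
    pooled = (pooled - pooled.mean()) / (pooled.std() + 1e-6)  # norm
    return pooled @ Wh                           # linear head -> class logits

rng = np.random.default_rng(0)
T, d, m, n_classes = 16, 32, 1, 10
logits = inca_adapter(rng.normal(size=(T, d)), rng.normal(size=(m, d)),
                      rng.normal(size=(d, d)), rng.normal(size=(d, d)),
                      rng.normal(size=(d, n_classes)))
assert logits.shape == (n_classes,)
```

Because only `Q`, `Wk`, `Wv`, and `Wh` are trained while `Z` comes from the frozen backbone, the adapter can be optimized without back-propagating through the base model.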
**Top-1 Test Error for ConvNext-B**

| Dataset | Full fine-tuning | InCA | InCA (last) | Inter. LP | LP |
|---|---|---|---|---|---|
| CUB-200 | 9.3 | 9.3 | 9.3 | 13.0 | 13.0 |
| DTD | 16.7 | 17.4 | 17.4 | 18.6 | 18.6 |
| Flood Depth | 16.9 | 16.5 | 20.5 | 19.4 | 19.9 |
| EuroSAT | 0.9 | 1.6 | 2.2 | 2.8 | 3.1 |
| Aircrafts | 10.5 | 17.9 | 23.1 | 54.7 | 54.7 |
| Herbarium | 17.0 | 22.7 | 26.4 | 37.4 | 39.5 |
| MIT-67 | 10.9 | 10.3 | 10.7 | 10.2 | 10.4 |
| Oxford Flowers | 0.5 | 0.4 | 0.4 | 0.6 | 0.6 |
| Oxford Pets | 5.2 | 4.6 | 5.6 | 5.9 | 6.0 |
| Stanf. Cars | 6.8 | 9.3 | 14.4 | 39.9 | 39.9 |
| Stanf. Dogs | 8.9 | 7.6 | 7.6 | 7.3 | 7.3 |
| Ave. Top-1 Test Error | 9.4 | 10.7 (-7.4) | 12.8 (-12.6) | 19.7 (-44.2) | 20.0 (-44.2) |
+ +
**Top-1 Test Error for ViT-L/16 (ViT pre-training)**

| Dataset | Full fine-tuning | InCA | InCA (last) | Inter. LP | LP |
|---|---|---|---|---|---|
| CUB-200 | 11.7 | 10.9 | 10.9 | 12.2 | 12.2 |
| DTD | 18.3 | 18.9 | 20.1 | 19.9 | 20.1 |
| Flood Depth | 20.8 | 18.1 | 18.7 | 18.7 | 18.7 |
| EuroSAT | 0.8 | 1.1 | 1.9 | 2.5 | 3.5 |
| Aircrafts | 20.7 | 23.2 | 28.2 | 44.5 | 46.4 |
| Herbarium | 20.3 | 26.9 | 31.3 | 38.9 | 41.3 |
| MIT-67 | 12.8 | 11.3 | 11.9 | 10.4 | 11.1 |
| Oxford Flowers | 0.6 | 0.3 | 0.4 | 0.3 | 0.4 |
| Oxford Pets | 5.5 | 5.3 | 5.4 | 6.5 | 6.5 |
| Stanf. Cars | 9.3 | 10.9 | 12.9 | 27.6 | 30.2 |
| Stanf. Dogs | 11.0 | 10.4 | 10.4 | 10.1 | 10.1 |
| Ave. Top-1 Test Error | 12.0 | 12.5 (-6.6) | 13.8 (-11.0) | 17.4 (-23.8) | 18.2 (-25.7) |
Table 8: Per-dataset Adaptation Top-1 Test Error on various architectures. We test transfer learning performance on fine-grained datasets applied to different architectures and pre-trainings, including ViTs, SWIN, and convolutional networks. We report the per-dataset Top-1 test error for the 11 datasets presented in Table 2.

For $m > 1$ the outputs of the tokens $[q_1, \ldots, q_m]$ through the cross-attn layer are averaged, and we test whether using $m > 1$ brings additional representational benefit to each adapter. We present the results in Table 9 and observe that varying $m$ does not have a consistent effect on the accuracy of the learned adapters; in our experiments we use $m = 1$ for InCA adapters to be most computationally efficient.

# F Implementation details

We present the optimization and augmentation details for training InCA, and note that we use standardized procedures for augmentation and training (without extensive hyper-parameter optimization) for the different transfer learning methods we evaluate.

**Augmentation** Unless otherwise specified we train with input image size 224 and standard augmentation practice [62]. In particular, during training we resize to image size 256 and apply random cropping; for testing we apply resizing and center cropping. For larger image resolutions we maintain the same resize-crop ratio of 0.875.

**Optimization** For the linear probing and InCA approaches, we train with the AdamW optimizer [50] and a cosine annealing learning rate scheduler [51] for 30 epochs, with weight decay $1\mathrm{e}{-4}$. In each method we sweep over 2 learning rates $\mathrm{lr} = \{1\mathrm{e}{-4}, 3\mathrm{e}{-4}\}$. For full fine-tuning, we also train with the AdamW optimizer (weight decay $1\mathrm{e}{-4}$) and cosine annealing for 30 epochs, but in addition identify optimal learning rates for each pre-training and architecture separately.
We first identify an

![](images/cdfdf2f994e44bdf59e7620620a5c10f9796dc66a7fb6a855471d7494fb24ea2.jpg)
Figure 7: Optimal representation pairings. Optimal ensemble pairs of InCA adapters at different locations of the network; optimal ensembles can improve over any single layer. ViT-L/16 DeiT pre-training.
**Top-1 Test Error for ViT-L/16 (DeiT pre-training), varying the # of InCA queries ($m$)**

| Dataset | $m = 1$ | $m = 2$ | $m = 4$ | $m = 16$ |
|---|---|---|---|---|
| CUB-200 | 9.1 | 9.5 | 9.6 | 9.5 |
| DTD | 17.8 | 18.4 | 19.2 | 19.1 |
| Aircrafts | 15.8 | 19.3 | 19.8 | 16.8 |
| MIT-67 | 10.1 | 10.8 | 11.0 | 10.9 |
| Oxford Flowers | 0.3 | 0.3 | 0.3 | 0.4 |
| Oxford Pets | 4.7 | 4.7 | 4.5 | 4.4 |
| Stanf. Cars | 8.4 | 8.7 | 8.8 | 8.2 |
| Stanf. Dogs | 8.1 | 6.3 | 6.3 | 5.9 |

Table 9: Varying the # of queries in the InCA adapter. We run an ablation testing the effect of applying a different number of queries $q_{1}, \ldots, q_{m}$ and then averaging when using the InCA adapter. We observe that in most cases $m$ does not have a large effect on accuracy and that $m = 1$ has sufficient representational capacity for the adapter.

architecture coarse-range learning rate based on performance on 5 datasets by sweeping over $\mathrm{lr} = \{1\mathrm{e}{-2}, 1\mathrm{e}{-3}, 1\mathrm{e}{-4}, 1\mathrm{e}{-5}, 1\mathrm{e}{-6}\}$, followed by a refined sweep with learning rates $\mathrm{lr} = \{B, 2B\}$, with $B$ being the optimal coarse learning rate.

For the VPT baseline, we follow the details presented in the paper and train with VPT-Deep, which was observed to outperform VPT-Shallow. To train VPT, we use the SGD optimizer with momentum and cosine annealing for 100 epochs. For each dataset we run a sweep over the prompt length $\{5, 20, 100\}$, base learning rate $\{0.25, 0.1, 0.05, 0.01\}$, and weight decay $\{1\mathrm{e}{-2}, 1\mathrm{e}{-4}\}$, for a total of 24 100-epoch runs per dataset. We compare the training cost of InCA and VPT-Deep in Table 7. In general we note that the shallow, small architectures of InCA and linear probing, which are separate from the base model, make them straightforward to optimize, compared with adaptation methods that receive back-propagated gradients from a frozen intermediate layer of the network, as shown in Fig. 2.

For the LoRA baseline [28] we apply LoRA-modified attention to each block's self-attention layer $(W_{k}, W_{q}, W_{v})$ in ViT-based architectures and to each block's WindowAttention for SWIN. For the low-rank dimension we sweep over $d = 5, 10, 50$ and select the best value. For BitFit we follow the discussion in [6] and train all of the bias parameters in the network in addition to full training of the head.
Analogously, for [45] we follow their procedure with LayerNorm, which includes training the LayerNorm parameters $(\gamma, \beta)$ of each layer along with training of the head of the pre-trained model. For all of the efficient training methods above we sweep over $\mathrm{lr} = \{3\mathrm{e}{-5}, 1\mathrm{e}{-4}, 3\mathrm{e}{-4}, 1\mathrm{e}{-3}\}$ to identify the best learning rate for each dataset.

**Broader Impacts** Our method, InCA, enables efficient and modular model adaptation that can be applied to any strong available pre-trained backbone. In that sense, InCA reduces the computational barriers to entry for training and evaluating over a large set of (potentially massive-scale) models and optimization settings to identify a model to be used for downstream adaptation. This bridges the gap between cutting-edge research in general visual representation learning and specific domain applications, especially since the best-performing models are computationally expensive to adapt. Given that InCA operates well on fine-grained visual datasets, it can have positive applications in scientific domains such as medical imaging. In many scientific domains, the available datasets are known to be fine-grained yet have sparse training data. In addition, the ease of use and reduced computational cost associated with downstream adaptation with InCA make it possible for domain experts without machine learning expertise to use InCA without access to large computational resources. This can enable domain researchers to solve their domain problems by leveraging various public pre-trained models to achieve competitive results.
\ No newline at end of file diff --git a/yourrepresentationsareinthenetworkcomposableandparalleladaptationforlargescalemodels/images.zip b/yourrepresentationsareinthenetworkcomposableandparalleladaptationforlargescalemodels/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..0952e89b9d8772598604fceffbb177c928a85926 --- /dev/null +++ b/yourrepresentationsareinthenetworkcomposableandparalleladaptationforlargescalemodels/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b48aa8019a24020c0bb32a966faca8d3727833eb66640169f369bb850c1f1507 +size 1156022 diff --git a/yourrepresentationsareinthenetworkcomposableandparalleladaptationforlargescalemodels/layout.json b/yourrepresentationsareinthenetworkcomposableandparalleladaptationforlargescalemodels/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..1f3764e0aec2cc678554be3bf364f11cc37f57d7 --- /dev/null +++ b/yourrepresentationsareinthenetworkcomposableandparalleladaptationforlargescalemodels/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:df17f36d6af4ca0dddba547a87d8b3ef516784ed3e3e8453a6e2e7ca4ee8ca55 +size 877178 diff --git a/youtubeaslalargescaleopendomainamericansignlanguageenglishparallelcorpus/9b3b7ac9-7958-48f6-a568-c8047c7c0a0d_content_list.json b/youtubeaslalargescaleopendomainamericansignlanguageenglishparallelcorpus/9b3b7ac9-7958-48f6-a568-c8047c7c0a0d_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..facc609449c19f0f6e9453d184e146e8e94a670f --- /dev/null +++ b/youtubeaslalargescaleopendomainamericansignlanguageenglishparallelcorpus/9b3b7ac9-7958-48f6-a568-c8047c7c0a0d_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a41156abe13d581d3217f9a1529a57aefde0bd99549b1340448ae3f0c0ced054 +size 102921 diff --git 
a/youtubeaslalargescaleopendomainamericansignlanguageenglishparallelcorpus/9b3b7ac9-7958-48f6-a568-c8047c7c0a0d_model.json b/youtubeaslalargescaleopendomainamericansignlanguageenglishparallelcorpus/9b3b7ac9-7958-48f6-a568-c8047c7c0a0d_model.json new file mode 100644 index 0000000000000000000000000000000000000000..a303955235676d82f88c24dc643ad2de8ef00245 --- /dev/null +++ b/youtubeaslalargescaleopendomainamericansignlanguageenglishparallelcorpus/9b3b7ac9-7958-48f6-a568-c8047c7c0a0d_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:555cf52dfe4a487e268442e198da3191ef37e35bcbd3f9e34ddefa5b80512bbb +size 123905 diff --git a/youtubeaslalargescaleopendomainamericansignlanguageenglishparallelcorpus/9b3b7ac9-7958-48f6-a568-c8047c7c0a0d_origin.pdf b/youtubeaslalargescaleopendomainamericansignlanguageenglishparallelcorpus/9b3b7ac9-7958-48f6-a568-c8047c7c0a0d_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..8898cdaca57e2b844d25c93b07489ee402db4ffb --- /dev/null +++ b/youtubeaslalargescaleopendomainamericansignlanguageenglishparallelcorpus/9b3b7ac9-7958-48f6-a568-c8047c7c0a0d_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:16858c943d731ab8cbb4758dd2f8adc43ae26ff7b0b0bb13dc4c9f4ae6d12faf +size 315414 diff --git a/youtubeaslalargescaleopendomainamericansignlanguageenglishparallelcorpus/full.md b/youtubeaslalargescaleopendomainamericansignlanguageenglishparallelcorpus/full.md new file mode 100644 index 0000000000000000000000000000000000000000..39b3377ca4e497ebd0b8d972f98b1e2cc306b724 --- /dev/null +++ b/youtubeaslalargescaleopendomainamericansignlanguageenglishparallelcorpus/full.md @@ -0,0 +1,381 @@ +# YouTube-ASL: A Large-Scale, Open-Domain American Sign Language-English Parallel Corpus + +David Uthus, Garrett Tanzer, Manfred Georg + +Google + +{duthus,gtanzer,mgeorg}@google.com + +# Abstract + +Machine learning for sign languages is bottlenecked by data. 
In this paper, we present YouTube-ASL, a large-scale, open-domain corpus of American Sign Language (ASL) videos and accompanying English captions drawn from YouTube. With $\sim 1000$ hours of videos and $>2500$ unique signers, YouTube-ASL is $\sim 3x$ as large and has $\sim 10x$ as many unique signers as the largest prior ASL dataset. We train baseline models for ASL to English translation on YouTube-ASL and evaluate them on How2Sign, where we achieve a new finetuned state of the art of 12.39 BLEU and, for the first time, report zero-shot results. + +# 1 Introduction + +The primary bottleneck for machine learning research on sign languages is data. As minority languages used by historically marginalized Deaf/Hard of Hearing communities, sign languages lack the plentiful online resources that have facilitated modern machine learning advances [4, 44, 12]. This is compounded by the fact that sign languages have no standardized written form: mining the videos that do exist is more difficult than retrieval for spoken language text. For translation specifically, there is the added problem of finding spoken language captions that are aligned to corresponding sign language content, rather than a voiceover with its own timing. The result is that datasets tend to be constructed by recording new footage in a studio or curating videos from a small number of manually selected content creators, which limits variety. + +In order to address these challenges, we present YouTube-ASL, a large-scale, open-domain corpus of American Sign Language (ASL) videos and accompanying English captions, primarily intended for ASL to English machine translation. We mined these videos from YouTube using a two-step process: first, we used automatic content-based annotations to identify potentially relevant captioned videos; and second, we used skilled human annotators to filter out videos with poor quality or misaligned captions. 
The result is a dataset with 984 hours of high-quality captioned video featuring $>2500$ unique signers, which is $\sim 3x$ as large as the largest prior ASL dataset [39] and has $\sim 10x$ as many unique signers as any sign language dataset to date. + +We train simple baseline models for sentence-level ASL to English translation on YouTube-ASL by embedding MediaPipe Holistic landmarks [26, 16] into the T5 language model [33]. Because YouTube videos may be removed over time and therefore cannot form a stable test set—and for comparison to prior work—we evaluate on a standard benchmark, How2Sign [13]. Borrowing from trends in mainstream machine learning [32, 15, 10], we provide not just finetuned but also zero-shot results to test out-of-domain generalization. We achieve a new finetuned state of the art of 12.39 BLEU (vs. the prior SOTA of 8.03 [40]), and for the first time report a zero-shot score, 3.95 BLEU. + +Table 1: Summary statistics for different sign language translation datasets. See Section 3.3 for details on how these statistics were derived for YouTube-ASL. + +
| Name | Language | Vocab. | # Hours | # Signers | Source |
|---|---|---|---|---|---|
| RWTH-PHOENIX-2014T [5] | DGS | 3K | 11 | 9 | TV |
| BOBSL [2] | BSL | 77K | 1447 | 39 | TV |
| SWISSTXT [7] | DSGS | - | 88 | - | TV |
| VRT-RAW [7] | VGT | - | 100 | - | TV |
| CSL-Daily [47] | CSL | 2K | 23 | 10 | Lab |
| KETI [22] | KVK | 419 | 28 | 14 | Lab |
| Public DGS Corpus [18] | DGS | - | 50 | - | Lab |
| SP-10 [42] | various | 17K | 14 | 79 | Web |
| AfriSign [17] | various | 20K | 152 | - | Web |
| How2Sign [13] | ASL | 16K | 79 | 11 | Lab |
| OpenASL [39] | ASL | 33K | 288 | 220 | Web |
| YouTube-ASL (ours) | ASL | 60K | 984 | >2519 | Web |
+ +We publicly release the YouTube-ASL video IDs. We hope that YouTube-ASL will be useful for general sign language pretraining, as well as downstream tasks such as ASL to English translation and caption alignment—both in the near term to aid in the construction of larger sign language datasets, and eventually to improve accessibility for the Deaf/Hard of Hearing community. + +# 2 Related Work + +In this section, we review prior sign language translation datasets and methods for translation from sign languages to spoken languages. + +# 2.1 Sign Language Translation Datasets + +Table 1 shows statistics on different sign language translation datasets. There are three main sources for sign language data: ad hoc recorded footage, interpreted TV broadcasts, and online video sharing platforms. + +In the first category are datasets that manually recruit signers and record them performing translations of desired phrases, either in a lab setting or with a camera on their personal device. These datasets tend to be small and feature few signers for logistical reasons, and may have exhaustive annotations because the small size of the dataset makes it feasible. This includes datasets such as CSL-Daily [47], with phrases related to daily life in Chinese Sign Language; KETI [22], with phrases related to emergency situations in Korean Sign Language; Public DGS Corpus [18], with elicited dialogues in German Sign Language; and How2Sign [13], with "How To" instructional monologues translated into American Sign Language. + +In the second category are datasets that collate interpreted TV programs from a collaborating national broadcaster. These datasets tend to be larger than newly recorded ones, but often use a small number of non-native interpreters and lack fine-grained caption alignment (because the supervision comes from the spoken language audio track). 
This includes datasets such as RWTH-PHOENIX-2014 [5], with weather forecasts interpreted into German Sign Language; SWISSTXT [7], with news/weather programs interpreted into Swiss German Sign Language; VRT [7], with news programs interpreted into Flemish Sign Language; and BOBSL [2], with BBC programs in many domains interpreted into British Sign Language. At 1447 hours, BOBSL is the largest sign language translation dataset to date (including the present work), but has only 39 signers and speech-aligned subtitles, vs. YouTube-ASL's $>2519$ signers and sign-aligned captions—though the two datasets are complementary because they are for different languages. + +In the third category are datasets that curate content from online video sharing platforms. In prior sign language translation datasets, this content is drawn from a small number of manually selected channels. + +This includes datasets such as SP-10 [42], with example sentences from an online multilingual sign dictionary; AfriSign [17], with translated Bible passages hosted on the Jehovah's Witnesses website; and OpenASL [39], with videos from three YouTube channels: DailyMoth, Sign1News, and the National Association of the Deaf. OpenASL is the largest prior ASL dataset and closest work to YouTube-ASL: the key difference is that YouTube-ASL is constructed with open-ended mining from automatic tags, rather than manual channel curation. OpenASL is largely a subset of YouTube-ASL, which—by utilizing the long tail of channels—is $\sim 3x$ as large and has $\sim 10x$ as many unique signers. + +There are several datasets for easier tasks than translation, like isolated sign recognition and finger-spelling recognition, that mine from the web by ambiguous means. MS-ASL [21], WLASL [24], ChicagoFSWild [37]/ChicagoFSWild+ [38], CISLR [19], and Indian-SL [35] are word-level datasets mined from YouTube, sign language-targeted sites like ASLU and ASL-LEX, or other unnamed video sharing platforms. 
These works do not specify how they retrieved their videos, so it is possible that they used a similar automatic tagging approach to YouTube-ASL, albeit on a more limited scale. + +# 2.2 End-to-End Sign Language Translation + +Originally, sign language translation approaches operated on glosses, linguistic annotations that represent individual signs, or cascaded translation through glosses as an intermediate step, like speech to text translation often cascades through speech recognition. More recently, due to a variety of deficiencies in glosses and lack of widespread gloss data, the field has shifted to end-to-end modeling with encoder-decoder Transformers, starting with Camgoz et al. [5]. + +The two main classes of approaches are those that take learned video embeddings as input [6, 40, 28] (via video encoders, primarily I3D [9], pretrained on tasks such as isolated sign recognition), and those that take estimated pose landmarks as input [28] (such as MediaPipe [26] or OpenPose [8]). Some works achieve modest gains given constant data with architectural tweaks like treating different cues in the input video (hands, face) differently [46, 43]. It is unclear to what extent these techniques are necessary or beneficial on larger datasets. Other works seek to benefit from transfer from spoken language or other sign language data [11, 45, 17]. All of these works train and evaluate on splits derived from the same underlying continuous sign language corpus (different datasets across papers), and sometimes multiple such datasets independently in the same paper. In contrast, we train on YouTube-ASL using an uncomplicated approach and evaluate on How2Sign, reporting both finetuned and zero-shot results to get a more robust understanding of our model's state-of-the-art performance. + +# 3 The YouTube-ASL Corpus + +YouTube-ASL is a corpus of American Sign Language (ASL) videos with accompanying English captions drawn from YouTube. 
Video sharing platforms like YouTube are appealing sources of sign language data because they host swaths of diverse content that are more broadly representative of real-world conditions than studio footage is. Of course, much of this data is irrelevant or low-quality, so it is imperative to develop cost-effective ways to sift through it.

We used a two-step pipeline to construct the corpus: first, retrieval using automatic content-based annotations, and second, filtering by skilled human annotators at a per-video level. This automatic retrieval step represents a departure from prior continuous sign language corpora and brings us closer to mining approaches from mainstream machine learning.

# 3.1 Automatically Retrieving Candidate Videos

As described previously in Abu-El-Haija et al. [1], the YouTube video annotation system associates machine-generated tags with each video in the form of Knowledge Graph entities, which are based on the video's metadata, context, and content signals. We retrieved listed public videos tagged as being related to sign language generally or American Sign Language specifically, as of January 2022. This automatic tagging step, while having higher recall than prior works, was flawed in that it was not aware of sign language in the video content itself—to be expected due to the limited nature of current sign language processing. This means that, for example, videos in sign language that do not explicitly mention sign language in the content or context were unlikely to be discovered. This failure mode was most salient for press conferences with simultaneous interpreters, which tend not to have well-aligned captions anyway.

Given these retrieved videos, we drilled down on those with user-generated captions—i.e., captions that were manually uploaded rather than automatically derived from speech—because speech-derived captions are not tightly aligned with signed content.
As a heuristic filtering step, we automatically removed videos with duration $< 10$ seconds or $>5$ hours, width $< 480$ pixels or height $< 360$ pixels, and frame rate $< 15$ fps or $>60$ fps. We arrived at these values through an iterative mining and auditing process, so that we could reduce annotator labor spent on irrelevant videos without excluding too many relevant ones. From inspection, the heuristic excluded a negligible number of desirable videos. The one class of useful videos one might expect this to exclude, short isolated sign videos as used by MS-ASL [21] and WLASL [24], tends to have the label in the video title or description rather than captions, so removing videos under 10 seconds does not have a substantial impact. Videos that were over 5 hours were often either live interpreted broadcasts (which did not have aligned captions) or not sign language (e.g., corrupted videos or mostly static content for hours on end).

Finally, we used off-the-shelf person detection tools to exclude videos where none of the captions corresponded to spans with exactly one person present in the video. We limit the scope of our efforts to signing monologues due to the challenges of modeling conversations between multiple signers.

The result was a list of 88,002 candidate videos that might contain ASL with high-quality captions.

# 3.2 Identifying High-Quality Videos with Skilled Human Annotators

While some smaller datasets like How2Sign [13] use annotators to manually align all captions, this becomes prohibitively expensive for larger datasets. For this reason, OpenASL [39] and BOBSL [2] use annotators to correct only their validation and test sets. We take a coarser-grained approach to annotations but apply it to our entire list of 88,002 candidates: we use humans to identify videos that are roughly suitable and include them in our corpus without modification.

To do so, we hired 3 native ASL signers with English proficiency to serve as annotators.
The annotators used a bespoke internal tool that would display a given YouTube video and present label options. In order to save time, the annotators were able to mark that their labels held for an entire channel of videos rather than for each video individually. Therefore it is possible that certain videos in the corpus are channel outliers and do not meet quality standards, but generally large channels have consistent quality. Each video was labelled by only one annotator unless they brought it up for wider discussion.

Through an iterative process involving written instructions, virtual meetings (through an ASL interpreter or signing project members), and escalations by email for edge cases, we aligned on standards for when to accept a video into the corpus. Some of the reasons for exclusion include: the video's captions do not exclusively correspond to signing; the video is in a sign language other than ASL; the video's captions do not correctly translate the ASL; and the captions are poorly aligned. Notably, in order to increase the size of the corpus, we chose to include videos across all skill levels and signing styles, as long as they were comprehensible to an ASL user and correctly captioned. This variety is beneficial for sign language recognition tasks, where models should be able to understand all signers, but may limit the corpus's usefulness for generation tasks, where consistency and controllability are important.

The result was a list of 11,093 videos whose captions are generally well-aligned English translations of signed ASL content.
See Table 1 for a comparison between the high-level attributes of YouTube-ASL and prior sign language translation datasets, including total number of hours.

Table 2: Statistics on the distribution of captions and videos in the YouTube-ASL corpus.

| Statistic | Value |
| --- | --- |
| Number of captions | 610,193 |
| Caption length (Average / 90th percentile, in characters) | 48.9 / 88.0 |
| Caption length (Average / 90th percentile, in words) | 8.8 / 16.0 |
| Caption duration (Average / 90th percentile, in seconds) | 4.8 / 8.76 |
| Video duration (Average / 90th percentile, in seconds) | 318.95 / 675.80 |

![](images/418cdd736314903dde76ea6e2bf40239a772d2fc55e4732334177a4f82938897.jpg)
Figure 1: Distribution of caption and video durations. For the video duration graph, we omit 27 videos whose duration exceeds 3600 seconds (between 3610 and 9017 seconds).

![](images/82f081f4b12b573b39054a6bd8a178d224e223d5f87b1fb32d89280ab195baf2.jpg)

These videos are paired with 610,193 English captions, with a total duration of 813 hours. See Table 2 for statistics on the distribution of captions, as well as Figure 1 for visualizations. The average caption length (8.8 words) and duration (4.8 seconds) are relatively short, which reflects that sentences may be split across multiple captions. We computed vocabulary size by counting the number of distinct strings between whitespace or punctuation across all captions. It is important to keep in mind that in addition to the signing itself, these videos' captions vary in style, literalness of translation (whether the content was originally produced in ASL and translated, or translated into ASL from these captions), spelling/grammar correctness, and more. This degree of variability is difficult to quantify in comparisons between datasets.

We use the number of unique channels, 2,519, as an approximate lower bound for the number of unique signers in the dataset: some channels may feature many signers, and some signers may appear across multiple channels. Note that with this method, OpenASL [39] would be estimated to have 3 signers, while its authors reached a count of 220 signers using more fine-grained methods. Even this likely underestimate is $\sim 10\mathrm{x}$ the count of any individual sign language dataset to date.

Figure 2 shows the distribution of videos per channel, for channels with at least 20 videos.
There are a few channels with many videos—in particular, the two largest channels are the same news channels featured in OpenASL—and then a long tail of channels with fewer videos. This means that the bulk of new footage present in YouTube-ASL but not OpenASL comes from relatively small channels, which adds variety. See Figure 3 for a sense of the distribution of (machine-annotated) topics across videos: they seem more diverse than prior datasets from video sharing platforms but still shaped by typical YouTube use cases, compared to BOBSL's more topic-balanced BBC programming.

# 4 Baseline Approach

In order to demonstrate the potential of YouTube-ASL, we consider a simple method for sentence-level machine translation from ASL to English built using off-the-shelf components. We use a deliberately barebones approach to avoid introducing inductive bias that helps in more limited settings but becomes harmful with scale.

![](images/85e0377f2711781f35641cd169ce720390d5260034e2ed2fa433f8e8b3594e68.jpg)
Figure 2: Distribution of videos per channel for channels with at least 20 videos.

![](images/870c3ef708ac392fea841f0f398839c51d808f36de6cf93cf3acb60058ee6a33.jpg)
Figure 3: A selection of high-level topics, with the number of YouTube-ASL videos automatically tagged as related to them. Note that a single video can be tagged with more than one topic.

# 4.1 Preprocessing

For our target English outputs, we use the raw captions from YouTube-ASL. Each training example is clipped to the boundaries of a single caption. We filter out captions with length $>300$ characters or duration $<200\mathrm{ms}$ or $>60\mathrm{s}$, which tend to be malformed, and any captions corresponding to video spans that do not contain exactly one person. We do not lowercase the captions or apply any other kind of text normalization.

For our sign language inputs, we use MediaPipe Holistic landmarks [26, 16], rather than raw video.
Sign language models that use pose-based inputs have a history of underperforming those that operate on learned video embeddings [21, 27]; it is unclear to what extent this is due to the information bottleneck in the (imperfectly predicted) pose representation, versus the availability of higher-quality pretrained encoders for video than for pose. Pose inputs offer some benefits like computational efficiency and privacy.

MediaPipe Holistic is a lightweight model that predicts 532 3D landmarks (in x-, y-, and z-image-space coordinates) for the hands, pose, and face of a single human in video footage. For sign language understanding tasks, many of these landmarks are redundant (high-detail face mesh) or unnecessary (lower body), and add undesirable complexity. We discard all but 85 of these points, selected a priori using domain knowledge about sign languages:

- For each hand, we use all 21 landmark points.
- For the pose, we use 6 landmark points, for the shoulders, elbows, and hips. This discards the lower body and pose landmarks redundant with the hand and face modules.
- For the face, we use 37 landmark points, from the eyes, eyebrows, lips, and face outline.

![](images/47fbcf365d136232824e14513d3f15c7e44c6dbe7175757def28436009d7f03a.jpg)
Figure 4: Overview of our model pipeline. Starting from an ASL video clip, we use MediaPipe Holistic to compute 3D landmarks for the face, hands, and body of the subject. We then discard irrelevant landmarks and normalize the remainder. These are concatenated and embedded by a linear projection layer into T5.1.1-Base, which then decodes the English translation. The blue components (Linear projection and T5) are the trainable parameters.

We normalize the landmarks by scaling them to fit in a unit bounding box across the duration of the clip. We represent landmarks that are not present in a frame with a large negative value.
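The landmark selection, normalization, and frame-rate halving described here can be sketched as follows. This is illustrative only: the array layout, the `preprocess` helper, and the exact fill value are assumptions, not the actual pipeline code. The kept points total $2 \times 21 + 6 + 37 = 85$, giving $85 \times 3 = 255$ features per frame.

```python
import numpy as np

# Layout is illustrative: 2 hands x 21 + 6 pose + 37 face = 85 points per frame.
N_HAND, N_POSE, N_FACE = 21, 6, 37
N_KEPT = 2 * N_HAND + N_POSE + N_FACE   # 85 landmark points
MISSING = -1e3                          # large negative fill for absent landmarks

def preprocess(landmarks: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """landmarks: (T, 85, 3) x/y/z coords of the kept points; mask: (T, 85)
    presence flags. Returns a half-frame-rate (ceil(T/2), 255) sequence."""
    pts = landmarks.astype(float)
    # Scale to a unit bounding box over the whole clip, using present points only.
    present = pts[mask]                            # (K, 3)
    lo, hi = present.min(axis=0), present.max(axis=0)
    pts = (pts - lo) / np.maximum(hi - lo, 1e-6)
    pts[~mask] = MISSING                           # fill absent landmarks
    pts = pts[::2]                                 # drop every second frame
    return pts.reshape(len(pts), N_KEPT * 3)       # (ceil(T/2), 255)
```

Each row of the output is one 255-dimensional frame vector of the kind fed to the linear projection layer described in Section 4.2.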
MediaPipe also predicts visibility (self-occlusion) of landmarks within the frame, which we ignore. To reduce sequence length, we discard every second frame. The final preprocessed input is therefore a half-frame rate sequence of 255-dimensional landmark vectors. Note that this half frame rate may vary from 7.5 to 30fps depending on the original video's frame rate, though most end up at 12 to 15 fps. + +# 4.2 Model + +Our model is a slightly modified version of T5 [33], which is an encoder-decoder Transformer [41] that has been trained on web-crawled English text. Rather than embed text tokens using a vocabulary of learned embeddings, we embed each 255-dimensional landmark frame into the encoder using a learned linear projection layer. Otherwise, our architecture is identical to T5.1.1-Base. + +We set the encoder context window to 256 tokens (frames) and the decoder context window to 128 tokens, which accommodate the training examples after halving the input frame rate and encoding the target text with T5's SentencePiece vocabulary [23]. + +# 5 Experiments + +We choose not to provide train, validation, and test splits for YouTube-ASL. Because YouTube videos may be deleted over time, the validation and test splits could not serve as a stable benchmark. We instead evaluate on How2Sign [13], a studio-recorded dataset released under CC BY-NC 4.0 consisting of "How To" instructional narratives translated from English into ASL. This also allows us to integrate trends towards more robust evaluation from speech and text modeling [32, 15, 10], where models trained on large web corpora are evaluated both zero-shot and finetuned on independently constructed benchmarks. + +Practices for constructing test sets in prior sign language dataset works are mixed. For example, OpenASL [39] and AfriSign [17] construct their test sets by randomly splitting at the sentence level; SP-10 [42] does the same but with multiway translations identified as a single sentence. 
How2Sign [13] samples document-level narratives rather than individual sentences, but most signers are shared between the train and test sets, and some narratives are present in both the train and test sets, translated by different signers. BOBSL [2] invests substantial effort into creating signer-independent, topic-balanced splits; this is perhaps why its translation baseline scores only 1.00 BLEU despite the dataset's size. Zero-shot evaluation lets us sidestep these issues and get a better sense of the model's quality for real use. + +Table 3: Metrics for ASL to English translation on How2Sign. Our models either train from scratch or finetune a pretrained T5 checkpoint, and are trained on How2Sign (H2S) only, YouTube-ASL (YT-ASL) only, a mixture of H2S and YT-ASL, or YT-ASL and then finetuned on H2S. + +
| Approach | Training Schedule | BLEU-1 | BLEU-2 | BLEU-3 | BLEU | BLEURT |
| --- | --- | --- | --- | --- | --- | --- |
| Álvarez et al. [3] | H2S | 17.40 | 7.69 | 3.97 | 2.21 | - |
| GloFE-VN [25] | H2S | 14.94 | 7.27 | 3.93 | 2.24 | 31.65 |
| Tarrés et al. [40] | H2S | 34.01 | 19.30 | 12.18 | 8.03 | - |
| Ours (no pretraining) | H2S | 13.92 | 4.69 | 1.82 | 0.86 | 30.65 |
| | YT-ASL | 14.53 | 5.47 | 2.61 | 1.41 | 29.55 |
| | YT-ASL + H2S | 28.60 | 14.56 | 8.68 | 5.60 | 37.72 |
| | YT-ASL → H2S | 28.38 | 15.41 | 9.55 | 6.26 | 39.40 |
| Ours (pretrained) | H2S | 14.96 | 5.11 | 2.26 | 1.22 | 29.98 |
| | YT-ASL | 20.93 | 10.35 | 6.14 | 3.95 | 34.98 |
| | YT-ASL + H2S | 36.35 | 23.00 | 16.13 | 11.89 | 44.78 |
| | YT-ASL → H2S | 37.82 | 24.13 | 16.92 | 12.39 | 46.63 |
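The BLEU-n columns in Table 3 are corpus-level n-gram precision scores. As an illustration of how the metric is built up (the reported numbers use a standard library implementation, not this sketch), a minimal unsmoothed corpus BLEU over whitespace tokens with a single reference per hypothesis:

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """Multiset of n-grams in a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def corpus_bleu(hyps, refs, max_n=4):
    """Unsmoothed corpus BLEU (0-100), one reference per hypothesis.
    Illustrative only; real evaluations should use SacreBLEU."""
    clipped = [0] * max_n   # clipped n-gram matches, per order
    totals = [0] * max_n    # total hypothesis n-grams, per order
    hyp_len = ref_len = 0
    for hyp, ref in zip(hyps, refs):
        h, r = hyp.split(), ref.split()
        hyp_len += len(h)
        ref_len += len(r)
        for n in range(1, max_n + 1):
            hc, rc = ngrams(h, n), ngrams(r, n)
            totals[n - 1] += max(len(h) - n + 1, 0)
            clipped[n - 1] += sum(min(c, rc[g]) for g, c in hc.items())
    if min(clipped) == 0:
        return 0.0  # any zero precision zeroes the geometric mean
    log_prec = sum(math.log(c / t) for c, t in zip(clipped, totals)) / max_n
    bp = 1.0 if hyp_len > ref_len else math.exp(1 - ref_len / max(hyp_len, 1))
    return 100 * bp * math.exp(log_prec)
```

A perfect match scores 100; any hypothesis set with no 4-gram overlap scores 0, which is why BLEU-1 through BLEU-3 are also reported for weak models.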
+ +# 5.1 Setup + +We ablate across four different training schedules: + +- H2S. We train only on How2Sign, not YouTube-ASL, for a like-for-like comparison with prior methods. +- YT-ASL: We train only on YouTube-ASL, and evaluate on How2Sign zero-shot. +- YT-ASL + H2S: We train on a mixture of How2Sign and YouTube-ASL, mixed in proportion to the size of the datasets. +- YT-ASL $\rightarrow$ H2S: We train on YouTube-ASL, then finetune on How2Sign. + +We also ablate the effect of pretraining on English text by comparing models trained from scratch using the T5.1.1-Base architecture, vs. finetuned from the T5.1.1-Base pretrained checkpoint. + +We train with a batch size of 128 and learning rate of 0.001 with Adafactor [36]; other hyperparameters are the T5X defaults. For models trained solely on How2Sign data, we train for 20,000 steps. For models trained on YouTube-ASL (including with How2Sign mixed in), we train for 200,000 steps. When finetuning on How2Sign after training on YouTube-ASL, we finetune for an additional 5,000 steps. Each 1,000 steps takes approximately 0.25 TPUv4-hours. + +Following prior work, we present BLEU [29] and BLEURT [34] scores. BLEU scores are computed using SacreBLEU [30] version 2, with all default options. BLEURT scores are computed using checkpoint BLEURT-20 [31, 14]. We decode using beam search with a beam width of 5. + +# 5.2 Results + +See Table 3 for metrics comparing our models to prior works on How2Sign [3, 25, 40]. The best results come from training on YouTube-ASL from a pretrained checkpoint, then finetuning on How2Sign, which achieves 12.39 BLEU vs. the state of the art of 8.03 BLEU [40]. The base model achieves 3.95 BLEU zero-shot, which is nontrivial but substantially worse than the finetuned score. Factors that could contribute to this gap include train/test leakage of signers and narratives, How2Sign's narrow domain, and the extra $\sim 10\%$ training data it represents. 
+ +Results are substantially worse when training from scratch, which suggests that T5's English pretraining gives the model a better initialization, as De Coster et al. [11] found for frozen pretrained language models. Results are abysmal when trained without YouTube-ASL. The most direct comparison of our approach to prior work is T5 trained from scratch on How2Sign only, which reaches just 0.86 BLEU, despite training on the same data as Tarrés et al. [40]'s 8.03 BLEU. This might be explained by their use of a pretrained video encoder and various decisions they made to optimize for small amounts of data (smaller network, more text normalization, careful hyperparameter sweep), whereas we used a less tuned configuration that was intended for larger datasets. + +Table 4: Qualitative examples from our best finetuned and zero-shot models, on sentences sampled from How2Sign by Tarrés et al. [40]. See Table 5 in the Appendix for the complete set of examples. + +
| Example | Source | Translation |
| --- | --- | --- |
| (1) | Reference | And that's a great vital point technique for women's self defense. |
| | Tarrés et al. | It's really a great point for women's self defense. |
| | Ours (zero-shot) | It's really great, especially for women who are facing barriers. |
| | Ours (finetuned) | It's really great for women's self defense. |
| (2) | Reference | In this clip I'm going to show you how to tape your cables down. |
| | Tarrés et al. | In this clip I'm going to show you how to improve push ups. |
| | Ours (zero-shot) | This video will show how to use the code online. |
| | Ours (finetuned) | In this clip we're going to show you how to cut a piece of clay. |
| (3) | Reference | In this segment we're going to talk about how to load your still for distillation of lavender essential oil. |
| | Tarrés et al. | Ok, in this clip, we're going to talk about how to fold the ink for the lid of the oil. |
| | Ours (zero-shot) | This video will discuss how to submit a digital form for the survey. |
| | Ours (finetuned) | In this clip we're going to talk about how to feed a set of baiting lizards for a lava field oil. |
+ +See Table 4 for qualitative examples of the translations produced by our best finetuned and zero-shot models, on sentences sampled from How2Sign by Tarrés et al. [40]. The translations capture elements of the reference translation but are clearly not yet of usable quality. The zero-shot predictions hew less closely to the references, but the errors usually make sense in light of the sign language input. For example, in (1), the sign used to mean "defense" also means "barrier". + +# 6 Conclusion + +In this paper, we presented YouTube-ASL, a new, publicly available parallel corpus for American Sign Language and English that is $\sim 3x$ the size and has $\sim 10x$ as many unique signers as the largest prior ASL dataset. Our key improvement over prior work is that we used automatic tagging followed by human filtering to increase mining recall without harming precision. We demonstrated the value of this data with a simple baseline built from off-the-shelf components (MediaPipe Holistic and T5) that achieves a new finetuned state of the art in ASL to English translation on How2Sign, 12.39 BLEU. We also reported a zero-shot score of 3.95 BLEU, a first for sign language translation. We hope that YouTube-ASL will be immediately useful for research on methods for sign language translation and caption alignment, as well as tools for automatic annotation/filtering of new sign language datasets. Because YouTube-ASL has so much signer variety, including across dialect and skill level, it may be less useful for generation than recognition tasks. + +While our baseline improves upon prior work, even the finetuned translations are subjectively low-quality and are not yet useful in the real world. We hope that more refined modeling approaches will provide better results with the same data, but despite our and prior efforts, ASL is still a low-resource language by modern standards [20]—let alone the many other sign languages of the world, most of which are even less resourced. 
Future work may look to address this by mining broader datasets with other kinds of supervision, and exploring multilingual transfer at larger scales. As model quality improves, future work should perform more comprehensive evaluations to understand differences across domains, dialects, levels of fluency, signer appearance, and other such factors.

# 7 Ethical Considerations

Sign language datasets pose privacy challenges, because the signer's appearance (body, facial expressions), which is personally identifying, is a vehicle for the language itself. Video anonymization techniques are not yet mature enough to be useful in this regard. Our corpus is composed of sign language content that uploaders made publicly visible on YouTube, and we release only video IDs so that changes to the underlying videos are automatically reflected in the corpus. While the corpus covers a broader variety of channels than prior works, this does not mean it is necessarily representative of the signing population—or even if it were representative, that models trained on it would work equally well for everyone.

We train our models on reduced poses as a form of anonymization, but this is not suitable for all modeling approaches and may harm model quality. Until sign language translation models are closer to usable quality, there is little risk of societal harm, except that individuals or organizations mistakenly rely on models that are inadequate. As we approach that point, sign language processing will adopt the risks of natural language processing in general, but with a great potential to improve accessibility for Deaf/Hard of Hearing people.

# References

[1] Sami Abu-El-Haija, Nisarg Kothari, Joonseok Lee, Paul Natsev, George Toderici, Balakrishnan Varadarajan, and Sudheendra Vijayanarasimhan. YouTube-8M: A large-scale video classification benchmark, 2016. URL https://arxiv.org/abs/1609.08675.
+[2] Samuel Albanie, Gül Varol, Liliane Momeni, Hannah Bull, Triantafyllos Afouras, Himel Chowdhury, Neil Fox, Bencie Woll, Rob Cooper, Andrew McParland, and Andrew Zisserman. BBC-Oxford British Sign Language dataset, 2021. URL https://arxiv.org/abs/2111.03635. +[3] Patricia Cabot Álvarez, Xavier Giro Nieto, and Laia Tarrés Benet. Sign language translation based on transformers for the How2Sign dataset. 2022. URL https://imatge.upc.edu/web/publications/sign-language-translation-based-transformers-how2sign-dataset. +[4] Danielle Bragg, Oscar Koller, Mary Bellard, Larwan Berke, Patrick Boudreault, Annelies Braffort, Naomi Caselli, Matt Huenerfauth, Hernisa Kacorri, Tessa Verhoef, Christian Vogler, and Meredith Ringel Morris. Sign language recognition, generation, and translation: An interdisciplinary perspective. In Proceedings of the 21st International ACM SIGACCESS Conference on Computers and Accessibility, ASSETS '19, page 16-31, New York, NY, USA, 2019. Association for Computing Machinery. ISBN 9781450366762. doi: 10.1145/3308561.3353774. URL https://doi.org/10.1145/3308561.3353774. +[5] Necati Cihan Camgoz, Simon Hadfield, Oscar Koller, Hermann Ney, and Richard Bowden. Neural sign language translation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2018. URL https://doi.org/10.1109/CVPR.2018.00812. +[6] Necati Cihan Camgoz, Oscar Koller, Simon Hadfield, and Richard Bowden. Sign language transformers: Joint end-to-end sign language recognition and translation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), June 2020. URL https://doi.org/10.1109/CVPR42600.2020.01004. +[7] Necati Cihan Camgoz, Ben Saunders, Guillaume Rochette, Marco Giovanelli, Giacomo Inches, Robin Nachtrab-Ribback, and Richard Bowden. Content4all open research sign language translation datasets, 2021. URL https://arxiv.org/abs/2105.02351. +[8] Z. Cao, G. Hidalgo, T. Simon, S. Wei, and Y. Sheikh. 
OpenPose: Realtime multi-person 2D pose estimation using part affinity fields. IEEE Transactions on Pattern Analysis and Machine Intelligence, 43(01):172-186, jan 2021. ISSN 1939-3539. doi: 10.1109/TPAMI.2019.2929257. URL https://doi.ieeecomputersociety.org/10.1109/TPAMI.2019.2929257. +[9] Joao Carreira and Andrew Zisserman. Quo vadis, action recognition? a new model and the kinetics dataset. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), July 2017. URL https://doi.org/10.1109/CVPR.2017.502. +[10] Alexis Conneau, Min Ma, Simran Khanuja, Yu Zhang, Vera Axelrod, Siddharth Dalmia, Jason Riesa, Clara Rivera, and Ankur Bapna. FLEURS: Few-shot learning evaluation of universal representations of speech. In 2022 IEEE Spoken Language Technology Workshop (SLT), pages 798-805, 2023. URL https://doi.org/10.1109/SLT54892.2023.10023141. + +[11] Mathieu De Coster, Karel D'Oosterlinck, Marija Pizurica, Paloma Rabaey, Severine Verlinden, Mieke Van Herreweghe, and Joni Dambre. Frozen pretrained transformers for neural sign language translation. In Proceedings of the 1st International Workshop on Automatic Translation for Signed and Spoken Languages (AT4SSL), pages 88-97, Virtual, August 2021. Association for Machine Translation in the Americas. URL https://aclanthology.org/2021.mtsummit-at4ssl.10. +[12] Mathieu De Coster, Dimitar Shterionov, Mieke Van Herreweghe, and Joni Dambre. Machine translation from signed to spoken languages: State of the art and challenges. Universal Access in the Information Society, pages 1-27, 2023. URL https://doi.org/10.1007/s10209-023-00992-1. +[13] Amanda Duarte, Shruti Palaskar, Lucas Ventura, Deepti Ghadiyaram, Kenneth DeHaan, Florian Metze, Jordi Torres, and Xavier Giro-i Nieto. How2Sign: A large-scale multimodal dataset for continuous American Sign Language. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 2735-2744, June 2021. 
URL https://doi.org/10.1109/CVPR46437.2021.00276. +[14] Google-Research. Google-research/bleurt: Bleurt is a metric for natural language generation based on transfer learning. URL https://github.com/google-research/bleurt. +[15] Naman Goyal, Cynthia Gao, Vishrav Chaudhary, Peng-Jen Chen, Guillaume Wenzek, Da Ju, Sanjana Krishnan, Marc'Aurelio Ranzato, Francisco Guzman, and Angela Fan. The flores-101 evaluation benchmark for low-resource and multilingual machine translation, 2021. +[16] Ivan Grishchenko and Valentin Bazarevsky. Mediapipe holistic - simultaneous face, hand and pose prediction, on device, Dec 2020. URL https://ai.googleblog.com/2020/12/mediapipe-holistic-simultaneous-face.html. +[17] Shester Gueuwou, Kate Takyi, Mathias Müller, Marco Stanley Nyarko, Richard Adade, and Rose-Mary Owusuaa Mensah Gyening. Afrisign: Machine translation for african sign languages. In 4th Workshop on African Natural Language Processing, 2023. URL https://openreview.net/forum?id=EHldk3J2xk. +[18] Thomas Hanke, Marc Schulder, Reiner Konrad, and Elena Jahn. Extending the Public DGS Corpus in size and depth. In Proceedings of the LREC2020 9th Workshop on the Representation and Processing of Sign Languages: Sign Language Resources in the Service of the Language Community, Technological Challenges and Application Perspectives, pages 75-82, Marseille, France, May 2020. European Language Resources Association (ELRA). ISBN 979-10-95546-54-2. URL https://aclanthology.org/2020/signlang-1.12. +[19] Abhinav Joshi, Ashwani Bhat, Pradeep S, Priya Gole, Shashwat Gupta, Shreyansh Agarwal, and Ashutosh Modi. CISLR: Corpus for Indian Sign Language recognition. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 10357-10366, Abu Dhabi, United Arab Emirates, December 2022. Association for Computational Linguistics. doi: 10.18653/v1/2022.emnlp-main.707. URL https://aclanthology.org/2022.emnlp-main.707. 
+[20] Pratik Joshi, Sebastin Santy, Amar Budhiraja, Kalika Bali, and Monjit Choudhury. The state and fate of linguistic diversity and inclusion in the nlp world, 2021. URL https://arxiv.org/abs/2004.09095. +[21] Hamid Reza Vaezi Joze and Oscar Koller. MS-ASL: A large-scale data set and benchmark for understanding american sign language. CoRR, abs/1812.01053, 2018. URL http://arxiv.org/abs/1812.01053. +[22] Sang-Ki Ko, Chang Jo Kim, Hyedong Jung, and Choongsang Cho. Neural sign language translation based on human keypoint estimation. Applied Sciences, 9(13), 2019. ISSN 2076-3417. doi: 10.3390/app9132683. URL https://www.mdpi.com/2076-3417/9/13/2683. +[23] Taku Kudo and John Richardson. SentencePiece: A simple and language independent subword tokenizer and detokenizer for neural text processing. In Proceedings of the 2018 Conference + +on Empirical Methods in Natural Language Processing: System Demonstrations, pages 66-71, Brussels, Belgium, November 2018. Association for Computational Linguistics. doi: 10.18653/v1/D18-2012. URL https://aclanthology.org/D18-2012. +[24] Dongxu Li, Cristian Rodriguez, Xin Yu, and Hongdong Li. Word-level deep sign language recognition from video: A new large-scale dataset and methods comparison. In The IEEE Winter Conference on Applications of Computer Vision, pages 1459-1469, 2020. URL https://doi.org/10.1109/WACV45572.2020.9093512. +[25] Kezhou Lin, Xiaohan Wang, Linchao Zhu, Ke Sun, Bang Zhang, and Yi Yang. Gloss-free end-to-end sign language translation, 2023. URL https://arxiv.org/abs/2305.12876. +[26] Camillo Lugaresi, Jiuqiang Tang, Hadon Nash, Chris McClanahan, Esha Uboweja, Michael Hays, Fan Zhang, Chuo-Ling Chang, Ming Guang Yong, Juhyun Lee, Wan-Teh Chang, Wei Hua, Manfred Georg, and Matthias Grundmann. MediaPipe: A framework for building perception pipelines, 2019. URL https://arxiv.org/abs/1906.08172. 
+[27] Amit Moryossef, Ioannis Tsochantaridis, Joe Dinn, Necati Cihan Camgoz, Richard Bowden, Tao Jiang, Annette Rios, Mathias Muller, and Sarah Ebling. Evaluating the immediate applicability of pose estimation for sign language recognition. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, pages 3434-3440, June 2021. URL https://doi.org/10.1109/CVPRW53098.2021.00382. +[28] Mathias Müller, Sarah Ebling, Eleftherios Avramidis, Alessia Battisti, Michèle Berger, Richard Bowden, Annelies Braffort, Necati Cihan Camgoz, Cristina España-bonet, Roman Grundkiewicz, Zifan Jiang, Oscar Koller, Amit Moryossef, Regula Perrollaz, Sabine Reinhard, Annette Rios, Dimitar Shterionov, Sandra Sidler-miserez, and Katja Tissi. Findings of the first WMT shared task on sign language translation (WMT-SLT22). In Proceedings of the Seventh Conference on Machine Translation (WMT), pages 744-772, Abu Dhabi, United Arab Emirates (Hybrid), December 2022. Association for Computational Linguistics. URL https://aclanthology.org/2022.wmt-1.71. +[29] Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 311-318, Philadelphia, Pennsylvania, USA, July 2002. Association for Computational Linguistics. doi: 10.3115/1073083.1073135. URL https://aclanthology.org/P02-1040. +[30] Matt Post. A call for clarity in reporting BLEU scores. In Proceedings of the Third Conference on Machine Translation: Research Papers, pages 186-191, Belgium, Brussels, October 2018. Association for Computational Linguistics. URL https://www.aclweb.org/anthology/W18-6319. +[31] Amy Pu, Hyung Won Chung, Ankur Parikh, Sebastian Gehrmann, and Thibault Sellam. Learning compact metrics for MT. 
In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 751-762, Online and Punta Cana, Dominican Republic, November 2021. Association for Computational Linguistics. doi: 10.18653/v1/2021.emnlp-main.58. URL https://aclanthology.org/2021.emnlp-main.58. +[32] Alec Radford, Jong Wook Kim, Tao Xu, Greg Brockman, Christine McLeavey, and Ilya Sutskever. Robust speech recognition via large-scale weak supervision, 2022. URL https://arxiv.org/abs/2212.04356. +[33] Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. Exploring the limits of transfer learning with a unified text-to-text transformer. J. Mach. Learn. Res., 21(1), jan 2020. ISSN 1532-4435. URL https://dl.acm.org/doi/abs/10.5555/3455716.3455856. +[34] Thibault Sellam, Dipanjan Das, and Ankur Parikh. BLEURT: Learning robust metrics for text generation. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7881-7892, Online, July 2020. Association for Computational Linguistics. doi: 10.18653/v1/2020.acl-main.704. URL https://aclanthology.org/2020.acl-main.704. + +[35] Prem Selvaraj, Gokul Nc, Pratyush Kumar, and Mitesh Khapra. OpenHands: Making sign language recognition accessible with pose-based pretrained models across languages. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2114-2133, Dublin, Ireland, May 2022. Association for Computational Linguistics. doi: 10.18653/v1/2022.acl-long.150. URL https://aclanthology.org/2022.acl-long.150. +[36] Noam Shazeer and Mitchell Stern. Adafactor: Adaptive learning rates with sublinear memory cost. In Jennifer Dy and Andreas Krause, editors, Proceedings of the 35th International Conference on Machine Learning, volume 80 of Proceedings of Machine Learning Research, pages 4596-4604. PMLR, 10-15 Jul 2018. 
URL https://proceedings.mlr.press/v80/shazeer18a.html. +[37] Bowen Shi, Aurora Martinez Del Rio, Jonathan Keane, Jonathan Michaux, Diane Brentari, Greg Shakhnarovich, and Karen Livescu. American Sign Language fingerspelling recognition in the wild. In 2018 IEEE Spoken Language Technology Workshop (SLT), pages 145-152, 2018. URL https://doi.org/10.1109/SLT.2018.8639639. +[38] Bowen Shi, Aurora Martinez Del Rio, Jonathan Keane, Diane Brentari, Greg Shakhnarovich, and Karen Livescu. Fingerspelling recognition in the wild with iterative visual attention. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), October 2019. URL https://doi.org/10.1109/ICCV.2019.00550. +[39] Bowen Shi, Diane Brentari, Greg Shakhnarovich, and Karen Livescu. Open-domain sign language translation learned from online video, 2022. URL https://arxiv.org/abs/2205.12870. +[40] Laia Tarres, Gerard I. Gállego, Amanda Duarte, Jordi Torres, and Xavier Giro i Nieto. Sign language translation from instructional videos, 2023. URL https://arxiv.org/abs/2304.06371. +[41] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. In I. Guyon, U. Von Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, editors, Advances in Neural Information Processing Systems, volume 30. Curran Associates, Inc., 2017. URL https://proceedings.neurips.cc/paper_files/paper/2017/file/3f5ee243547dee91fbd053c1c4a845aa-Paper.pdf. +[42] Aoxiong Yin, Zhou Zhao, Weike Jin, Meng Zhang, Xingshan Zeng, and Xiaofei He. MLSLT: Towards multilingual sign language translation. In 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 5099-5109, 2022. URL https://doi.org/10.1109/CVPR52688.2022.00505. +[43] Kayo Yin and Jesse Read. Better sign language translation with STMC-transformer. 
In Proceedings of the 28th International Conference on Computational Linguistics, pages 5975-5989, Barcelona, Spain (Online), December 2020. International Committee on Computational Linguistics. doi: 10.18653/v1/2020.coling-main.525. URL https://aclanthology.org/2020.coling-main.525. +[44] Kayo Yin, Amit Moryossef, Julie Hochgesang, Yoav Goldberg, and Malihe Alikhani. Including signed languages in natural language processing, 2021. URL https://arxiv.org/abs/2105.05222. +[45] Biao Zhang, Mathias Müller, and Rico Sennrich. SLTUNET: A simple unified model for sign language translation. In The Eleventh International Conference on Learning Representations, 2023. URL https://openreview.net/forum?id=EBS4C77p_5S. +[46] Hao Zhou, Wengang Zhou, Yun Zhou, and Houqiang Li. Spatial-temporal multi-cue network for continuous sign language recognition. Proceedings of the AAAI Conference on Artificial Intelligence, 34(07):13009-13016, Apr. 2020. doi: 10.1609/aaai.v34i07.7001. URL https://ojs.aaaai.org/index.php/AAAI/article/view/7001. + +[47] Hao Zhou, Wengang Zhou, Weizhen Qi, Junfu Pu, and Houqiang Li. Improving sign language translation with monolingual data by sign back-translation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 1316–1325, June 2021. URL https://doi.org/10.1109/CVPR46437.2021.00137. + +# A Appendix + +This document contains supplementary material for the YouTube-ASL paper. + +# A.1 Full Qualitative Results + +Table 5: The complete set of qualitative examples from our best finetuned and zero-shot models, on sentences sampled from How2Sign by Tarrés et al. [40]. + +
| # | System | Sentence |
| --- | --- | --- |
| (1) | Reference | And that's a great vital point technique for women's self defense. |
| | Tarrés et al. | It's really a great point for women's self defense. |
| | Ours (zero-shot) | It's really great, especially for women who are facing barriers. |
| | Ours (finetuned) | It's really great for women's self defense. |
| (2) | Reference | In this clip I'm going to show you how to tape your cables down. |
| | Tarrés et al. | In this clip I'm going to show you how to improve push ups. |
| | Ours (zero-shot) | This video will show how to use the code online. |
| | Ours (finetuned) | In this clip we're going to show you how to cut a piece of clay. |
| (3) | Reference | In this segment we're going to talk about how to load your still for distillation of lavender essential oil. |
| | Tarrés et al. | Ok, in this clip, we're going to talk about how to fold the ink for the lid of the oil. |
| | Ours (zero-shot) | This video will discuss how to submit a digital form for the survey. |
| | Ours (finetuned) | In this clip we're going to talk about how to feed a set of baiting lizards for a lava field oil. |
| (4) | Reference | You are dancing, and now you are going to need the veil and you are going to just grab the veil as far as possible. |
| | Tarrés et al. | So, once you're belly dancing, once you've got to have the strap, you're going to need to grab the thumb, and try to avoid it. |
| | Ours (zero-shot) | he's dancing a lot. Now he needs a hat and a chain |
| | Ours (finetuned) | Their hopping and dancing is now, they're going to need their squat and squat and they're going to be able to move independently. |
| (5) | Reference | But if you have to setup a new campfire, there's two ways to do it in a very low impact; one is with a mound fire, which we should in the campfire segment earlier and the other way to setup a low impact campfire is to have a fire pan, which is just a steel pan like the top of a trash can. |
| | Tarrés et al. | And other thing I'm going to talk to you is a little bit more space, a space that's what it's going to do, it's kind of a quick, and then I don't want to take a spray skirt off, and then I don't want it to take it to the top of it. |
| | Ours (zero-shot) | But if you have to set up a new campfire, you have to set up a campfire. You have to do it in a campfire, or set up a tentfire. |
| | Ours (finetuned) | But if you have to set up a new campfire, there are two ways to do a low impact fire, one is a cone fire, which we have to do in the tent earlier, and the other one is to set up a campfire in a fire pan. |
| (6) | Reference | So, this is a very important part of the process. |
| | Tarrés et al. | It's a very important part of the process. |
| | Ours (zero-shot) | Wash your hands. |
| | Ours (finetuned) | Alright, let's get started. |
# B Datasheets for Datasets

We provide documentation of the dataset based on Datasheets for Datasets (https://arxiv.org/pdf/1803.09010.pdf).

# B.1 Motivation

For what purpose was the dataset created? The dataset was created primarily to serve as training data for ASL to English machine translation; prior datasets are smaller and have fewer unique signers. We used human annotators to identify high-quality ASL videos with well-aligned captions, but it was not feasible to manually correct or align any of the included captions. This is generally sufficient for translation, but slightly less ideal for tasks like ASL to English caption alignment, where consistent alignment standards might be desired. The dataset is probably also less suitable for English to ASL translation, due to the signing variation across videos, though this may be addressed with methods for improved controllability.

Who created the dataset (e.g., which team, research group) and on behalf of which entity (e.g., company, institution, organization)? This dataset was created by Dave Uthus, Garrett Tanzer, and Manfred Georg for Google.

Who funded the creation of the dataset? Google.

# B.2 Composition

What do the instances that comprise the dataset represent (e.g., documents, photos, people, countries)? Each instance is the video id of a YouTube video.

How many instances are there in total (of each type, if appropriate)? There are 11,093 video ids.

Does the dataset contain all possible instances or is it a sample (not necessarily random) of instances from a larger set? This contains a subset of videos available on YouTube of people signing in American Sign Language with English captions. This is not strictly representative of the larger set, as we have applied a combination of automatic and manual filtering techniques to find high-quality videos with high-quality captions.

What data does each instance consist of?
Each instance consists of a single video id, which represents an ASL video with associated English captions.

Is there a label or target associated with each instance? The English captions may be considered the target for each instance, but this depends on the task being attempted.

Is any information missing from individual instances? No.

Are relationships between individual instances made explicit (e.g., users' movie ratings, social network links)? No.

Are there recommended data splits (e.g., training, development/validation, testing)? No. YouTube datasets can change over time due to the nature of the platform (videos can be made private or deleted), thus there are no recommended splits.

Are there any errors, sources of noise, or redundancies in the dataset? Yes. During the annotation process, we allowed the annotators to mark whole channels with the same annotations, so there may be some videos which are not of the same quality as the rest of the channel. There may also be minor errors in the videos or captions that the annotators explicitly deemed acceptable.

Is the dataset self-contained, or does it link to or otherwise rely on external resources (e.g., websites, tweets, other datasets)? The dataset is not self-contained, as it consists of YouTube video ids only. As such, there is no guarantee that the dataset will remain constant over time. The actual videos themselves fall under YouTube's Terms of Service (https://www.youtube.com/static?template=terms).

Does the dataset contain data that might be considered confidential (e.g., data that is protected by legal privilege or by doctor-patient confidentiality, data that includes the content of individuals' non-public communications)? No, the dataset consists of video ids for public videos only, and does not rehost any of the underlying data.

Does the dataset contain data that, if viewed directly, might be offensive, insulting, threatening, or might otherwise cause anxiety?
The video ids comprising the dataset itself are random identifiers. The videos referenced by our video ids are hosted by YouTube and therefore subject to YouTube's community guidelines. Neither we nor our annotators encountered any such content.

Does the dataset identify any subpopulations (e.g., by age, gender)? The dataset does not identify any subpopulations.

Is it possible to identify individuals (i.e., one or more natural persons), either directly or indirectly (i.e., in combination with other data) from the dataset? The video ids comprising the dataset itself are random identifiers. The videos referenced by our video ids include people signing in ASL, which inherently includes the person's appearance. Our dataset does not provide any extra information about these people that was not already publicly available from their uploaded videos, and respects when videos are deleted/made private by virtue of using video ids.

Does the dataset contain data that might be considered sensitive in any way (e.g., data that reveals race or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history)? As with the previous question, the videos referenced by our video ids may contain information about many topics, if the signer chose to discuss that information in their publicly uploaded video. Our dataset does not provide any extra information and respects deletions.

# B.3 Collection Process

How was the data associated with each instance acquired? Was the data directly observable (e.g., raw text, movie ratings), reported by subjects (e.g., survey responses), or indirectly inferred/derived from other data (e.g., part-of-speech tags, model-based guesses for age or language)?
The videos referenced by each video id instance consist solely of data that is directly observable (uploaded videos and captions). The selection of video ids is implicitly decided by a combination of automatic and manual filtering in order to ensure a relatively high quality level.

What mechanisms or procedures were used to collect the data (e.g., hardware apparatuses or sensors, manual human curation, software programs, software APIs)? A combination of software programs and manual annotations was used to select preexisting YouTube videos.

If the dataset is a sample from a larger set, what was the sampling strategy (e.g., deterministic, probabilistic with specific sampling probabilities)? Not applicable.

Who was involved in the data collection process (e.g., students, crowdworkers, contractors) and how were they compensated (e.g., how much were crowdworkers paid)? The authors gathered the initial collection of video ids, and annotators were hired to help annotate the videos for filtering purposes.

Over what timeframe was the data collected? We collected videos up to January 2022.

Were any ethical review processes conducted (e.g., by an institutional review board)? No.

Did you collect the data from the individuals in question directly, or obtain it via third parties or other sources (e.g., websites)? The dataset consists of references to videos that include people, but doesn't collect or release any new information about those people.

Were the individuals in question notified about the data collection? No, as we only provide video ids and no further information about the videos.

Did the individuals in question consent to the collection and use of their data? Not applicable.

If consent was obtained, were the consenting individuals provided with a mechanism to revoke their consent in the future or for certain uses?
As we only provide ids and not the raw content, if users remove their videos, those videos are no longer available for use in our dataset.

Has an analysis of the potential impact of the dataset and its use on data subjects (e.g., a data protection impact analysis) been conducted? No.

# B.4 Preprocessing/cleaning/labeling

Was any preprocessing/cleaning/labeling of the data done (e.g., discretization or bucketing, tokenization, part-of-speech tagging, SIFT feature extraction, removal of instances, processing of missing values)? Yes, we filtered out videos that were not relevant, of poor quality, had poor captions, etc. The result is a list of video ids, which point to unmodified videos.

Was the "raw" data saved in addition to the preprocessed/cleaned/labeled data (e.g., to support unanticipated future uses)? The original set of video ids will not be made available.

Is the software that was used to preprocess/clean/label the data available? No.

# B.5 Uses

Has the dataset been used for any tasks already? None prior to the baselines provided in this paper.

Is there a repository that links to any or all papers or systems that use the dataset? No.

What (other) tasks could the dataset be used for? In addition to the intended task of ASL to English translation, the dataset could be used for related sign language understanding tasks like caption alignment, and potentially for sign language generation tasks like English to ASL translation, or sign language tasks that do not require captions.

Is there anything about the composition of the dataset or the way it was collected and preprocessed/cleaned/labeled that might impact future uses? The dataset was filtered for high-quality videos and captions, which includes a variety of signing styles and skill levels, as long as they can be understood by an ASL user and are captioned correctly. This amount of variety may be less than ideal for tasks like generation where consistency is preferred.
Even though it is varied, it is still not necessarily representative of the signing community as a whole, and should be treated as such.

Are there tasks for which the dataset should not be used? This dataset should not be used as a benchmark for comparing models across time, because the data that YouTube-ASL is derived from will change over time with video deletions or other modifications.

Any other comments?

# B.6 Distribution

Will the dataset be distributed to third parties outside of the entity (e.g., company, institution, organization) on behalf of which the dataset was created? The dataset is open-sourced.

How will the dataset be distributed (e.g., tarball on website, API, GitHub)? GitHub.

When will the dataset be distributed? It is currently available.

Will the dataset be distributed under a copyright or other intellectual property (IP) license, and/or under applicable terms of use (ToU)? We release the YouTube-ASL video ids under the CC BY 4.0 International license, while the actual videos/captions on YouTube are preexisting and subject to the YouTube Terms of Service (https://www.youtube.com/static?template=terms).

Have any third parties imposed IP-based or other restrictions on the data associated with the instances? See above for license information.

Do any export controls or other regulatory restrictions apply to the dataset or to individual instances? No.

# B.7 Maintenance

Who will be supporting/hosting/maintaining the dataset? The authors will be responsible for maintaining the dataset.

How can the owner/curator/manager of the dataset be contacted (e.g., email address)? By contacting any of the authors listed on the publication.

Is there an erratum? No.

Will the dataset be updated (e.g., to correct labeling errors, add new instances, delete instances)? It may be updated, and if so, updates will be communicated via the associated GitHub page.
If the dataset relates to people, are there applicable limits on the retention of the data associated with the instances (e.g., were the individuals in question told that their data would be retained for a fixed period of time and then deleted)? Our dataset is constructed similarly to past YouTube-related datasets, in that we only provide video ids. Thus, if a user makes their YouTube video private or deletes it, it will no longer be available for use.

Will older versions of the dataset continue to be supported/hosted/maintained? No; if we need to remove older versions, this will be communicated on the associated GitHub page.

If others want to extend/augment/build on/contribute to the dataset, is there a mechanism for them to do so? No, we are currently not planning to allow formal contributions to the dataset at this time, but others may extend the dataset on their own in accordance with the license.

# C Additional Information

URL to the data: https://github.com/google-research/google-research/tree/master/youtube_asl

Hosting and maintenance: The data website is on GitHub under Google Research's shared GitHub repository, while the data itself is hosted on Google Research's shared Google Cloud Service.

# D Author Statement

The authors bear all responsibility in case of violation of rights, and confirm that this dataset is open-sourced under the CC BY 4.0 International license.
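Because the release consists only of video ids (see the data URL in Section C), downstream users must resolve the ids to YouTube URLs themselves. Below is a minimal sketch; the file name `youtube_asl_video_ids.txt` and the one-id-per-line format with optional `#` comments are assumptions for illustration, not the official release specification:

```python
def load_video_ids(path):
    """Read one video id per line, skipping blank lines and '#' comments.
    (The file layout here is an assumption, not the official release spec.)"""
    with open(path) as f:
        return [line.strip() for line in f
                if line.strip() and not line.lstrip().startswith("#")]


def to_watch_urls(video_ids):
    """Map each YouTube video id to its public watch URL."""
    return [f"https://www.youtube.com/watch?v={vid}" for vid in video_ids]


# Example (hypothetical path):
#   urls = to_watch_urls(load_video_ids("youtube_asl_video_ids.txt"))
```

Ids whose videos have since been deleted or made private will simply fail to resolve, consistent with the retention behavior described above.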
# YouTubePD: A Multimodal Benchmark for Parkinson's Disease Analysis

Andy Zhou $^{1*}$ Samuel Li $^{1*}$ Pranav Sriram $^{2*}$ Xiang Li $^{1*}$ Jiahua Dong $^{1*}$ Ansh Sharma $^{1}$ Yuanyi Zhong $^{1}$ Shirui Luo $^{3}$ Maria Jaromin $^{3}$ Volodymyr Kindratenko $^{1,2,3}$ George Heintz $^{4}$ Christopher Zallek $^{5}$ Yu-Xiong Wang $^{1,2,3}$

$^{1}$ Department of Computer Science, University of Illinois Urbana-Champaign
$^{2}$ Department of Electrical and Computer Engineering, University of Illinois Urbana-Champaign
$^{3}$ National Center for Supercomputing Applications, University of Illinois Urbana-Champaign
$^{4}$ Health Care
Engineering Systems Center, University of Illinois Urbana-Champaign
$^{5}$ OSF Healthcare Illinois Neurological Institute - Neurology

https://uiuc-yuxiong-lab.github.io/YouTubePD

# Abstract

The healthcare and AI communities have witnessed growing interest in the development of AI-assisted systems for automated diagnosis of Parkinson's Disease (PD), one of the most prevalent neurodegenerative disorders. However, progress in this area has been significantly impeded by the absence of a unified, publicly available benchmark, which prevents comprehensive evaluation of existing PD analysis methods and the development of advanced models. This work overcomes these challenges by introducing YouTubePD – the first publicly available multimodal benchmark designed for PD analysis. We crowd-source existing videos featuring PD from YouTube, exploit multimodal information including in-the-wild videos, audios, and facial landmarks across 200+ subject videos, and provide dense and diverse annotations from a clinical expert. Based on our benchmark, we propose three challenging and complementary tasks encompassing both discriminative and generative tasks, along with a comprehensive set of corresponding baselines. Experimental evaluation showcases the potential of modern deep learning and computer vision techniques, in particular the generalizability of the models developed on our YouTubePD to real-world clinical settings, while revealing their limitations. We hope that our work paves the way for future research in this direction.

# 1 Introduction

As one of the most prevalent neurodegenerative disorders, Parkinson's Disease (PD) affects over 10 million people worldwide, and the number of PD subjects is projected to double within the next 20 years [2]. Notably, PD is a progressive disease, with symptoms becoming increasingly pronounced and severe over time.
Meanwhile, the diagnostic and treatment costs associated with PD are substantial, amounting to $14 billion in the United States alone [2]. Therefore, there is an urgent need for developing AI-assisted systems for automated PD assessment in the healthcare and AI communities. Such systems can facilitate the recognition of undiagnosed people with PD, aid clinicians in their evaluation, and play a crucial role in the continuing care of people with PD by monitoring disease progression and tracking responses to therapies.

However, the progress in this area has been significantly impeded by the absence of a unified, publicly available benchmark. For AI-driven applications, the first and foremost endeavor lies in establishing a suitable benchmark. Ideally, the benchmark is openly accessible to facilitate methodology development, exemplified by the notable ImageNet benchmark that has substantially catalyzed object recognition performance [12]. In contrast, existing PD datasets are typically constructed in clinical settings and are thus kept private to protect patient privacy [3, 5, 17, 19, 25, 33, 41, 48].

![](images/d80506bf27f9d0827e0f36c2f76eb5e2a307b6a3a0bcaddc3c8eb125d1944586.jpg)

![](images/40cbd33f745b3dccb0d8ee7afed53c90d368eff2d618e84a45ba9e722d54f9e3.jpg)
(a)

![](images/32fa9aa826dcb007be66a40f210a7478dc220f0939554280ed4c75b469209c0b.jpg)
(b)

![](images/400d767b685b0b8059e77c062d4088cc68998a08e2473df3f8d6984899ea3b8f.jpg)
(c)

Figure 1: We propose YouTubePD, an open-source, multimodal benchmark on Parkinson's Disease (PD) analysis. Our dataset contains in-the-wild videos, audios, and facial landmarks. On the right, we show the three tasks on our benchmark: (a) facial-expression-based PD classification, (b) multimodal PD classification, and (c) PD progression synthesis.
In addition, the creation of PD datasets is a costly, time-consuming process, which requires significant effort for collecting patient data and curating annotations from qualified clinical experts.

In this paper, we overcome the aforementioned challenges by introducing an open benchmark that not only enables comparisons of existing PD analysis techniques, but also facilitates the development of advanced models that leverage state-of-the-art computer vision and deep learning techniques. Our key insight is that, instead of curating data from clinical settings as is normally the case, we exploit publicly available online resources and crowd-source existing videos featuring PD from YouTube.

These YouTube videos contain rich examination modalities, making our benchmark inherently multimodal. This property allows for the investigation of some particularly indicative symptoms associated with PD. More concretely, PD is often characterized by the manifestation of a motor-based symptom known as facial bradykinesia – the lessened movement of orofacial muscles and a reduction in spontaneous emotional expressions [8]. This results in a constant “mask-like” expression in PD subjects referred to as hypomimia [8], a sensitive biomarker for PD [1]. Following prior work [3, 5, 19, 25, 48], our benchmark considers hypomimia as a primary indicator for PD diagnosis. Moreover, our benchmark includes other symptoms experienced by PD patients, including tremors and speech changes. These additional symptoms are considered alongside hypomimia, providing a more comprehensive evaluation of the disease and motivating a multimodal solution.

Our contributions are as follows:

1. To tackle the lack of a unified, open, and multimodal benchmark in PD analysis, we introduce YouTubePD. By leveraging YouTube videos of public figures experiencing PD, we obtain high-quality, in-the-wild data of Parkinson's symptoms.
As shown in Figure 1, our YouTubePD benchmark consists of multiple modalities, including facial expression videos, facial landmarks, and audios, across 200+ subject videos containing dense and diverse annotations from a clinical expert. Our annotations encompass both video-level and frame-specific information with informative region and textual descriptions. In addition, our dataset spans a number of years for each subject, providing natural PD progression information.
2. We define three challenging tasks for our YouTubePD benchmark: facial-expression-based PD classification, multimodal PD classification, and PD progression synthesis.
3. We present a comprehensive set of baselines for the three proposed tasks. In contrast to prior deep learning methodologies presented for PD analysis, our baselines leverage more recent, state-of-the-art techniques for enhanced video and multimodal understanding. We demonstrate that these models developed on our benchmark transfer well to real-world clinical datasets and exhibit strong performance on the PD classification task.

# 2 Related Work

In this section, we discuss prior work and studies on facial expressivity and hypomimia in PD, as well as previous vision-based frameworks used to address PD classification.
| Dataset | # Videos | # Images | # PD Subjects | # Healthy Controls | Open Access | Modality |
| --- | --- | --- | --- | --- | --- | --- |
| Abrami et al. [3] | 107 | – | 68 | – | ✗ | Video |
| Bandini et al. [5] | 306 | – | 17 | 17 | ✗ | Video |
| Grammatikopoulou et al. [19] | – | 623622 | 110 | 71 | ✗ | Image |
| Jin et al. [25] | 176 | – | 33 | 31 | ✗ | Video |
| Novotny et al. [33] | 166 | – | 91 | 75 | ✗ | Video |
| Su et al. [41] | 172 | – | 47 | 39 | ✗ | Video |
| FacePark-GITA [18] | 270 | – | 30 | 24 | ✗ | Video |
| YouTubePD (Ours) | 283 | – | 16 | 89 | ✓ | Video, Audio, Landmark |
Table 1: Comparison of statistics between datasets used in prior work and our benchmark. Our YouTubePD is the first open-access and multimodal benchmark for PD analysis.

Facial expressivity in PD. The connection between PD and facial expressivity has been thoroughly studied in prior work across a variety of experimental scenarios and controlled variables [9, 37, 40, 44, 47]. Such work explores the role of facial expressivity (both posed and voluntary expressions) and emotion recognition in patients in relation to their diagnosis of PD, primarily based on the UPDRS-III scale [16]. Follow-up work [33, 38, 44, 45, 47, 48] expands upon these seminal studies on facial expressivity in PD, further exploring its relationships with the subjective emotional experience, the correlation with PD severity, and de-novo conditions.

Video-based PD assessment. Video-based PD assessment techniques are typically categorized into geometric approaches [5, 19, 25, 33, 41] and appearance-based approaches [3, 17]. The former type employs low-dimensional, geometric features extracted from facial landmarks and action units (AUs) [5, 17, 19, 33, 41]. The latter type uses the raw visual information contained within the videos. Some of these approaches apply convolutional neural networks [3, 17], support vector machines [5], and long short-term memory (LSTM) models [25].

PD benchmarks. Previous studies commonly use their own (private or semi-private) benchmarks to evaluate the performance of their methods, lacking a shared benchmark for comparison. Table 1 provides a summary of these datasets, which often exhibit variations in size and distribution (e.g., mean age, gender, and disease progression). Consequently, comparing the performance of different methods becomes challenging, further hindering advancements in PD classification and analysis. This is a key motivation for our work in establishing a public and common benchmark.
# 3 Dataset

Our objective is to identify not only the presence but also the severity of Parkinson's Disease (PD). With this in mind, we introduce YouTubePD, which stands as the first publicly available benchmark designed for PD analysis that (i) utilizes multimodal information including in-the-wild videos, audios, and facial landmarks, and (ii) enables the exploration of various task types including unimodal and multimodal PD classification, as well as PD progression synthesis. The comparison between our benchmark and datasets in various previous studies is summarized in Table 1. To further support comprehensive classification approaches, we offer video-level and region-level clinician annotations pertaining to PD diagnosis. In the interest of open access and wider research contributions, the basis of this benchmark is sourced from YouTube interview videos featuring public figures who are openly sharing their experiences with PD. Our work is among the first endeavors advocating for PD analysis using multimodal information. This approach aligns with the diagnostic techniques employed by human clinicians, who utilize a wide range of cues in order to diagnose PD. This convergence of multimodal information in our model mimics the human approach, thereby enriching the accuracy and depth of PD analysis. Our dataset collection and annotation pipeline is outlined in Figure 2.

Videos. We manually curate a list of 16 public figures with a confirmed and open PD diagnosis and collect multiple videos (spanning multiple years) of each individual before and after their diagnosis. We collect a total of 283 videos from YouTube. Each video is roughly 10 seconds of an individual public figure speaking in an interview setting.
Our corpus can be divided into three subgroups: 65 videos of public figures after their diagnosis of PD was made public, 68 videos of the same public figures several years before their diagnosis, and 150 videos of a broader healthy control group of public figures without PD. We collect and store both video and audio data. The videos are preprocessed by cropping and resizing with a facial keypoint detection model [29], so that clips are centered on the subject's face.

![](images/48a3b91b1f937e788052b7144cc7b71faa5e807a1a59a9f87fb13ba5339f401b.jpg)
Figure 2: Our dataset collection and annotation pipeline. First, we compile a list of public figures who have publicly confirmed their PD diagnosis. We then source their videos from YouTube. From these videos, we handpick clips that are informative for PD detection. A clinical expert then reviews these clips, providing both video-level and region-level annotations, detailing the severity of their PD, and highlighting specific symptoms of the condition.

![](images/b7188ff1b5a64bbbec591013d7e59aa768b8102d0d3b8e83f0f95e3a6f982896.jpg)
(a) An example annotation of a video from our dataset.

![](images/ce87e7199ef9e0b114352ff14cc19485a5ab6aace2316f5fa5e89e9a07c50d39.jpg)
(b) Examples of annotated frames with corresponding bounding boxes in increasing severity.

Figure 3: Representative examples for our video annotations and important facial regions.

Annotations. Our annotations (Figure 3) are generated by a clinician. Each video has both (i) an overall severity label with 6 levels, with 0 denoting the absence of PD and 5 indicating a severe form of PD; and (ii) a confidence level label ranging from 1 to 10. A label of 1 indicates that the clinician was not confident in their assessment, while 10 indicates absolute confidence. For selected frames in each video, facial regions particularly informative for PD are also annotated.
We define a set of 14 important facial regions for PD analysis, derived as a combination of facial creases [20] and other areas of facial movement observed in PD hypomimia. Each region is mapped to a distinct region index that describes the facial region used for the diagnosis and 10 distinct symptom indices (e.g., moving, apart, increasing) that describe the anomaly. In these frames, (i) a polygon bounding box is drawn around the region, (ii) a text caption is used to summarize the region and anomaly, and similar to the video-level annotations, (iii) a severity label and (iv) a confidence label are provided for each region. These annotations and indices are summarized in Section B of the Appendix. + +Audios. We provide audios as an additional modality. Each video clip has its associated audio sharing the same label. The audio clips are preprocessed to account for cases where multiple speakers are present or significant background noise or music overpowers the intended speaker. We utilize source separation [42] to clean any segments that have either of these two violations, isolating the desired subject's audios from other speakers and background noise. Though the audio modality has a distinct set of UPDRS standards [16], we do not re-announce the ground truth label for the audios but use the holistic severity and confidence annotation based on the videos instead. This aims to ensure the label consistency between the audios and other modalities to facilitate multimodal PD classification. + +Landmarks. Our dataset incorporates facial landmarks extracted from video frames to leverage facial motion. Using the landmark detection model from Face++ [29], we extract 106 2D point landmarks from each frame. The landmarks can be grouped into 7 parts: contour, left/right eyebrow, left/right eye, nose, and mouth. 
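The landmark-centered cropping described in the Videos paragraph above can be sketched as follows. This is a minimal NumPy illustration; the function name, margin, output size, and nearest-neighbour resize are our own illustrative choices, not the exact settings of the Face++ [29] pipeline.

```python
import numpy as np

def crop_around_landmarks(frame, landmarks, out_size=224, margin=0.25):
    """Crop a square region centered on detected facial landmarks and resize it.

    frame: (H, W, 3) uint8 image; landmarks: (N, 2) array of (x, y) pixel
    coordinates, e.g., the 106 Face++ points. margin pads the landmark
    bounding box; both margin and out_size are illustrative defaults.
    """
    (x0, y0), (x1, y1) = landmarks.min(axis=0), landmarks.max(axis=0)
    cx, cy = (x0 + x1) / 2.0, (y0 + y1) / 2.0
    half = max(x1 - x0, y1 - y0) * (1.0 + margin) / 2.0
    h, w = frame.shape[:2]
    # Clamp the square window to the frame bounds before cropping.
    xa, xb = int(max(cx - half, 0)), int(min(cx + half, w))
    ya, yb = int(max(cy - half, 0)), int(min(cy + half, h))
    crop = frame[ya:yb, xa:xb]
    # Nearest-neighbour resize via index sampling (a stand-in for a real resizer).
    ys = np.linspace(0, crop.shape[0] - 1, out_size).round().astype(int)
    xs = np.linspace(0, crop.shape[1] - 1, out_size).round().astype(int)
    return crop[np.ix_(ys, xs)]
```

Applying this per frame yields face-centered clips of a fixed resolution for the downstream feature backbone.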
# 4 Tasks

We propose three tasks on YouTubePD: facial-expression-based PD classification, multimodal PD classification with facial expressions, audios, and facial landmarks, and PD progression synthesis. We describe our tasks, as well as the dataset splits and evaluation metrics for each task.

# 4.1 Facial-Expression-Based PD Classification

Task description. Following prior work [17, 18] and clinical findings [25], we focus on facial expression as the primary modality for diagnosing PD. In this task, we aim to analyze the videos of facial expressions in our dataset and produce either a binary classification (PD or healthy) or a multiclass classification of PD severity over six levels (healthy plus five severity levels).

Dataset split. Due to the imbalanced distribution between PD-positive and PD-negative videos, we use a relatively small training set and a relatively large evaluation set. For the training set, we randomly sample 36 PD-positive and 36 PD-negative videos. We use the remaining 211 videos for the evaluation set. Due to the small number of annotated PD-positive samples, we use K-fold cross-validation on the training set for hyperparameter tuning.

Evaluation metrics. The PD classification task on our benchmark is particularly challenging, due to its strong imbalance and limited number of positive samples. This mirrors the medical application setting of PD and presents a challenge: a model whose errors consist mostly of false negatives might still achieve a high accuracy due to the small number of positive samples, but may misdiagnose a patient with PD. This is a more harmful error than a false positive. Therefore, the conventional classification accuracy may not be the best choice for our setting. We mainly focus on additional metrics, in particular, F1 and AUROC. F1-score calculates the harmonic mean of precision and recall.
For multiclass classification, we report the weighted F1-score computed with a one-vs-rest (ovr) approach. AUROC computes the area under the receiver operating characteristic curve over a set of decision thresholds and reports a summarized metric through an averaging policy (e.g., macro).

Meanwhile, we classify the severity of PD, which is an ordinal attribute. For example, classifying severity 4 as 3 is more accurate than classifying it as 2. Therefore, we also report the Mean Squared Error (MSE) to account for this issue. To summarize, we recommend prioritizing the F1-score and AUROC, while also paying attention to the MSE.

# 4.2 Multimodal PD Classification

In this task, we consider the combination of multimodal information for PD classification. The multimodal inputs in our study include facial expression videos, facial landmarks, and audios. The task aims to obtain a binary or multiclass severity prediction from the collaboration of all available modalities as input. We follow the identical data split and evaluation metrics as the facial-expression-based PD classification task. We also directly use the holistic label from the facial expression video as the ground truth label for each modality derived from that video.

# 4.3 PD Progression Synthesis

Task description. We further propose a task of PD progression synthesis, which aims to simulate the symptoms of PD in facial expressions given images of healthy individuals. Previous work on image translation primarily focuses on texture alterations derived from a reference image or content modifications from a pretrained conditional model [11, 15, 30, 35, 49, 50]. Our PD progression synthesis task, however, introduces a more challenging aspect by incorporating more nuanced PD-specific information for the transfer.
Our task naturally makes use of the PD progression information provided in our dataset, which contains sets of images depicting individuals in both healthy and PD states across an extended time frame. Note that these image pairs from healthy to PD states are not strictly aligned, which further increases the task difficulty. Complementary to the discriminative tasks in Sections 4.1 and 4.2, our synthesis task provides a comprehensive overview of the progression and manifestation of PD. + +Dataset split. We divide the data by individual subjects, allocating 11 for training and 5 for evaluation. Our experiment validates that this split is sufficient for training and produces reliable evaluation results. + +Evaluation metrics. To establish baselines for our synthesis or "style" transfer task, we employ a variety of metrics that assess the quality of synthesized facial images. Our evaluation begins with the Fréchet Inception Distance (FID) score computed between the source and generated images, serving as an indicator of visual fidelity and consistency. Subsequently, we devise a facial content change evaluation rooted in the paired healthy and PD states of the same individual, called direction score. Specifically, we use a pretrained VGGFace [36] model $\phi$ to extract the instance-level feature from each image. For each healthy frame $N_{i}$ , we first identify its corresponding frame $P_{i}$ from the same + +![](images/69d2880d0caa49054751bcbf41f97f956cd2c54d7b12eec243263a5ac7a2562f.jpg) +Figure 4: Our baseline methods for the facial-expression-based PD classification (left) and multimodal PD classification (right) tasks. + +individual's PD frame with the highest feature similarity. 
Then, given the generated image $N_{i}^{\prime}$, the direction score is calculated as

$$
D = \frac{1}{M} \sum_{i=1}^{M} \operatorname{cosine}\left(\phi(N_{i}^{\prime}) - \phi(N_{i}),\; \phi(P_{i}) - \phi(N_{i})\right), \tag{1}
$$

where cosine denotes the cosine similarity and $M$ is the number of frames. Finally, we assess a CLS score based on classification accuracy, using a pretrained PD classification model from Section 4.1.

# 5 Methods

In this section, we propose baseline methods for each of our three tasks. For facial-expression-based classification, we propose an attention-based architecture that leverages both region and video annotations for stronger and more interpretable performance. For multimodal PD classification, we establish a simple fusion pipeline to exploit different modalities and improve performance. Finally, for PD progression synthesis, we compare various image translation models. The experimental results reveal the limitations of different methods and the challenging nature of the tasks. We include illustrations of our baseline methods for the first two tasks in Figure 4.

# 5.1 Facial-Expression-Based PD Classification

We develop a simple but effective baseline method for this task. Apart from leveraging the most suitable pretrained representation, we integrate the region-level information explicitly alongside the holistic video information. Moreover, we apply an attention-based classifier and use a novel multiclass hierarchy-guided loss to train our baseline model. Our pipeline is in Figure 4.

PD representation learning. Due to the lack of annotated PD data, we find that it is beneficial to transfer representations learned from related domains to PD classification. To this end, we use a ResNet50 [22] pretrained on VGGFace [36], a face dataset with a wide range of subjects, as our frozen feature backbone.
Pretraining on the base domain helps the model learn a wide range of identities across variations in pose, environment, and demographics. We find that for our task, pretraining on general facial recognition outperforms backbones trained on video action recognition as well as facial expression recognition, contrary to [18].

PD informative regions. Our PD informative regions are based on areas of the face which experts typically use to diagnose patients. Our quantitative results show that the use of region features instead of entire frames not only improves performance, but also reduces the tendency to learn spurious features. Correspondingly, we use a pretrained facial landmark classification module and RoIAlign [21] to extract a feature map for each of the 14 PD informative regions from the feature map produced by the backbone from each video.

Attention with video-level conditioning. Following feature extraction, we process the feature maps with a learnable positional embedding and a linear projection of the global feature map to the same dimension as an individual region feature map. Next, we pass our region features through two multi-headed attention blocks. Following TimeSformer [7], we use divided spatial-temporal attention, which we apply on the region feature maps instead of raw pixels. This consists of an attention block that applies self-attention over the individual regions in each frame (spatial), followed by an attention block that applies self-attention over successive frames (temporal). We include ablation studies on the use of region annotations, region information, and attention to verify each component of our method.

Multitask hierarchy-guided loss. In addition to region features, we leverage region annotations to further improve performance. After attention, we extract the class embedding corresponding to each
| Model | Top-1 Acc ↑ | F1 ↑ | AUROC ↑ | Recall (Binary) ↑ | MSE (Multiclass) ↓ |
| --- | --- | --- | --- | --- | --- |
| VGGFace [22] | 83.56 (±0.84) / 78.20 (±3.13) | 0.56 (±0.01) / 0.23 (±0.02) | 0.86 (±0.01) / 0.68 (±0.01) | 0.79 (±0.01) | 2.29 (±0.77) |
| Ours | 88.00 (±1.88) / 77.03 (±3.36) | 0.59 (±0.02) / 0.25 (±0.01) | 0.92 (±0.01) / 0.74 (±0.02) | 0.85 (±0.05) | 2.25 (±0.45) |
+ +Table 2: Binary/multiclass classification results on YouTubePD with facial expression. We observe superior performance on F1, AUROC, and MSE with our method. + +
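As a concrete reference for how the metrics reported in the table above can be computed, here is a hedged scikit-learn sketch; the function name is our own, and the paper does not specify its exact implementation.

```python
import numpy as np
from sklearn.metrics import f1_score, mean_squared_error, roc_auc_score

def pd_metrics(y_true, y_prob):
    """Multiclass metrics from Sec. 4.1: weighted one-vs-rest F1,
    macro-averaged AUROC, and MSE on the ordinal severity labels.

    y_true: (N,) integer severity labels; y_prob: (N, C) class probabilities.
    """
    y_pred = y_prob.argmax(axis=1)
    return {
        "f1": f1_score(y_true, y_pred, average="weighted"),
        "auroc": roc_auc_score(y_true, y_prob, multi_class="ovr", average="macro"),
        "mse": mean_squared_error(y_true, y_pred),
    }
```

For the binary setting, the default binary F1 and `roc_auc_score(y_true, y_prob[:, 1])` apply instead.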
| Modality | Top-1 Acc ↑ | F1 ↑ | AUROC ↑ | Recall (Binary) ↑ | MSE (Multiclass) ↓ |
| --- | --- | --- | --- | --- | --- |
| VGGFace [22] | 83.56 (±0.84) / 78.20 (±3.13) | 0.56 (±0.01) / 0.23 (±0.02) | 0.86 (±0.01) / 0.68 (±0.01) | 0.79 (±0.01) | 2.29 (±0.77) |
| Landmark (Ours) | 56.37 (±1.72) / 51.69 (±1.94) | 0.32 (±0.02) / 0.26 (±0.02) | 0.72 (±0.02) / 0.68 (±0.01) | 0.81 (±0.05) | 4.87 (±0.28) |
| Audio (Ours) | 54.84 (±1.63) / 47.79 (±1.66) | 0.27 (±0.02) / 0.16 (±0.01) | 0.66 (±0.01) / 0.50 (±0.02) | 0.70 (±0.04) | 6.26 (±0.39) |
| Multimodal (Ours) | 70.61 (±2.13) / 82.75 (±2.85) | 0.61 (±0.02) / 0.28 (±0.02) | 0.87 (±0.02) / 0.80 (±0.03) | 0.83 (±0.04) | 1.40 (±0.25) |
Table 3: Binary/multiclass classification results on YouTubePD in the multimodal setting. Multimodal fusion further improves the performance over unimodal baselines, even when additional modalities have lower performance than the primary modality of facial expression.

region. We add an additional multi-layer perceptron (MLP) head for each of the 14 regions and add the corresponding cross-entropy loss to the overall loss objective. We convert the region annotations to binary annotations, where 1 indicates that the region is indicative of PD and 0 indicates that the region is normal. We also extract a class embedding corresponding to the entire video, which is passed through a final MLP head to obtain the video-level class distribution. This is trained with a hierarchy-guided loss function, implemented as a weighted cross-entropy loss, which we add to the region-level loss,

$$
\mathrm{Loss} = \lambda\, l_{\text{bin}}(\theta(x), y_{\text{vid}}) + (1 - \lambda)\, l_{\text{multi}}(\theta(x), y_{\text{vid}}) + \frac{1}{|R|} \sum_{r \in R} l_{\text{reg}}(\theta(x_r), y_{\text{reg}}), \tag{2}
$$

where $x$ is an input image, $\theta$ is the model, $y_{\mathrm{vid}}$ is the video-level annotation, $y_{\mathrm{reg}}$ is the region-level annotation, $l_{\mathrm{bin}}$ and $l_{\mathrm{multi}}$ are the cross-entropy losses on binary and multiclass labels respectively, $l_{\mathrm{reg}}$ is the cross-entropy loss on region labels, $R$ is the set of regions, and $\lambda$ is a trade-off hyperparameter. We use both the original multiclass annotations and binary annotations for the video-level loss. We freeze the first three stages of the pretrained ResNet50 backbone and train the additional layers, as well as our attention blocks and MLP heads, end-to-end with this loss.

# 5.2 Multimodal PD Classification

We provide a baseline multimodal fusion method for PD classification.
As shown in Figure 4, we first obtain predictions from each modality, and then use a logit fusion step to fuse them together considering their respective confidence. + +Single modal prediction. For the facial expression video modality, we utilize the same encoder as in the facial-expression-based PD classification (Section 5.1), which yields a 2,048-dimensional feature representation. We obtain the final prediction with a linear classifier. For the landmark modality, we select 10 frames from the video, extract their facial landmarks, and classify them using an MLP. The features from all selected frames are concatenated for classification. For the audio modality, we extract the features using a pretrained masked auto-encoder applied on the spectrogram representations of the audios [32]. After averaging across all frames to get a 768-dimensional video-level feature representation, an MLP is trained to obtain the video-level logits. + +Fusion strategy. We employ a simple but effective fusion strategy. For the video and landmark modalities, rather than using concatenation for the classification, we predict each frame independently. Then we sum the predictions of 10 frames to obtain two logits representing the video modality and landmark modality. After that, we apply $\mathcal{L}_1$ normalization to the logits of each modality. Thus, we treat the logits for different modalities equally. Then we directly sum them up and determine the final prediction based on the max fused logits. During training, the same multitask hierarchy-guided loss is adopted. For the facial expression and audio modules, all the pretrained weights are frozen. + +# 5.3 PD Progression Synthesis + +We investigate a wide spectrum of image translation methods, including generation-based generative adversarial networks (GANs), translation-based GANs, and recently emerging diffusion models. Specifically, for generation-based GANs, we adopt [11, 26, 34]. 
We use their weights pretrained on FFHQ [27] and finetune them on our positive training data. Then, we project the PD-negative face to the
| Metric | ADA [26] | Few-Shot [34] | JoJoGAN [11] | HRFAE [49] | CycleGAN [50] | CUT [35] | SD [39] |
| --- | --- | --- | --- | --- | --- | --- | --- |
| FID ↓ | 87.08 | 94.60 | 129.61 | 37.65 | 73.82 | 46.63 | 72.12 |
| CLS ↑ | 34.29 | 26.67 | 37.74 | 11.32 | 26.42 | 11.32 | 66.98 |
| Direction ↑ | 0.2874 | 0.2912 | 0.3219 | 0.2021 | 0.2832 | 0.3237 | 0.0491 |

ADA [26], Few-Shot [34], and JoJoGAN [11] are generation-based GANs; HRFAE [49], CycleGAN [50], and CUT [35] are translation-based GANs; SD [39] is a diffusion model.
+ +Table 4: Quantitative comparisons on PD progression synthesis. Different generative methods exhibit distinct behaviors under three metrics. + +
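The direction score of Eq. (1), reported in the table above, can be sketched as follows; the arrays stand in for VGGFace features $\phi(\cdot)$ extracted beforehand, and the function name is our own.

```python
import numpy as np

def direction_score(feat_generated, feat_healthy, feat_pd):
    """Eq. (1): cosine similarity between the change produced by the model,
    phi(N_i') - phi(N_i), and the reference healthy-to-PD change,
    phi(P_i) - phi(N_i), averaged over M frames.

    Each argument is an (M, d) array of face-encoder features.
    """
    v = feat_generated - feat_healthy  # change induced by synthesis
    u = feat_pd - feat_healthy         # ground-truth direction of change
    cos = (v * u).sum(axis=1) / (np.linalg.norm(v, axis=1) * np.linalg.norm(u, axis=1))
    return float(cos.mean())
```

A score near 1 means the synthesized change points along the true healthy-to-PD direction; a score near 0 means the edit is unrelated to PD progression.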
| Model | Top-1 Acc ↑ | F1 ↑ | AUROC ↑ |
| --- | --- | --- | --- |
| VGGFace [22] | 79.4 | 0.24 | 0.66 |
| Ours | 76.4 | 0.25 | 0.71 |
Table 5: We observe similar multiclass classification performance when conducting a leave-one-subject-out analysis over three patients.

learned image space using the GAN model. For GANs designed for image translation, we either use their official pretrained model [49] or directly train from scratch [35, 50]. For the diffusion models, we finetune Stable Diffusion [39] with LoRA [24] and use SDEdit [30] for image translation.

# 6 Experiments

In this section, we present the results of our baselines and proposed methods. We find that modern computer vision methods can be applied to YouTubePD and transfer well to real clinical applications.

Main results: Facial-expression-based PD classification. We report the average results and standard deviations over 5 different runs. The results are summarized in Table 2. For the baseline, we use a ResNet50 pretrained with VGGFace and finetuned on our dataset with linear probing. This obtains relatively strong performance, suggesting the efficacy of facial representations for PD diagnosis. In both the binary and multiclass settings, our proposed attention-based model outperforms the pretrained VGGFace baseline on most metrics.

Main results: Multimodal PD classification. We additionally provide baseline results for the other modalities as described in Section 5. From Table 3, we note that either modality alone ends up being considerably weaker than our primary modality of facial expression, and the fusion result achieves better performance in both the multiclass and binary settings. Our results primarily serve to demonstrate the potential for achieving better performance from the additional modalities in tandem with facial expressions, despite their weaker performance when used in isolation.

Main results: PD progression synthesis. The synthesis results of various baselines are shown in Table 4 and Figure 5. FID represents the consistency of the image translation.
The CLS score and direction score focus on the semantic change in different aspects. Generally speaking, generation-based GANs often obtain balanced results on the CLS and direction metrics. However, they fail to achieve low FID, since the images need to be aligned to the pretraining data. Translation-based methods can often achieve high consistency, but tend to focus on modifications other than PD progression. The diffusion models achieve a significantly higher CLS score; however, they tend to overfit the training data and create faces that are similar to training images but do not retain the facial features of the test image. Thus, they largely change the faces in test images, resulting in a low direction score. Our extensive comparisons reveal the substantial limitations of state-of-the-art generative models in extracting very fine-grained and subtle information for image translation, further highlighting the importance of our benchmark.

Analysis: Improvements are robust to different dataset splits. In Table 5, we present the results when we train the facial expression model on a modified training and test split, where we ensure there is a PD-positive patient represented only in the test split. We report the average multiclass results of three such leave-one-subject-out splits. In these new splits, we still observe that our method outperforms the baseline under the F1 and AUROC metrics. In the leave-one-subject-out setting, the AUROC of our method improves by 0.05 and F1 improves by 0.01. This improvement is similar to our original setting (AUROC by 0.06 and F1 by 0.02). Note that, similar to the main results in Table 2, the Top-1 accuracy is not indicative of model performance because of the presence of a large number of negative samples.
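The subject-disjoint constraint behind these leave-one-subject-out splits can be sketched as follows (plain Python; names are illustrative, and in our actual analysis the held-out subject's videos join a larger evaluation set).

```python
def leave_one_subject_out(video_ids, subject_of, held_out_subject):
    """Split videos so the held-out subject never appears in training.

    video_ids: iterable of video identifiers; subject_of: mapping from
    video id to subject id; held_out_subject: subject excluded from training.
    """
    train = [v for v in video_ids if subject_of[v] != held_out_subject]
    test = [v for v in video_ids if subject_of[v] == held_out_subject]
    return train, test
```

Repeating this over each held-out PD-positive subject and averaging the resulting metrics yields the analysis reported in Table 5.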
![](images/4dcebc881c66c6e752413cafc5e371915efaac849d02408317a5096aad8bc383.jpg)

Figure 5: Qualitative comparisons on PD progression synthesis (panels: Healthy, ADA, Few-Shot, JoJoGAN, HRFAE, CycleGAN, CUT, Stable Diffusion, and PD Positive). The diffusion model yields superior results; however, it may inadvertently introduce inaccurate alterations to the face.
| Model | Top-1 Acc ↑ | F1 ↑ | AUROC ↑ |
| --- | --- | --- | --- |
| VGGFace [22] | 69.31 / 71.29 | 0.42 / 0.45 | 0.79 / 0.81 |
| Ours | 68.31 / 77.22 | 0.52 / 0.61 | 0.92 / 0.96 |
+ +Table 6: Binary classification results on real-world patients in the speech test set and the facial activation set [23]. Our method consistently outperforms baselines on real-world PD data. + +
| Model | Top-1 Acc ↑ | F1 ↑ | AUROC ↑ |
| --- | --- | --- | --- |
| AffectNet-8 [31] | 76.00 | 0.45 | 0.86 |
| VGGFace | 83.56 | 0.56 | 0.86 |
Table 8: Ablation study on facial expression classification pretraining, in terms of binary classification with the AffectNet-8 backbone. We observe a decrease in performance compared with our facial recognition pretraining.
| Model | Binary Acc ↑ | Multiclass Acc ↑ |
| --- | --- | --- |
| I3D [10] | 70.60 | 45.64 |
| C2D [46] | 69.05 | 40.96 |
| X3D-XS [13] | 74.17 | 42.77 |
| SlowFast-8x8 [14] | 63.81 | 44.52 |
| VGGFace | 83.56 | 78.20 |
Table 7: Ablation study on action recognition pretraining. We observe significantly lower performance than with our facial recognition pretraining.
| Model | Top-1 Acc ↑ | F1 ↑ | AUROC ↑ |
| --- | --- | --- | --- |
| Video / Temporal Attention | 77.14 | 0.49 | 0.91 |
| Region / Spatial Attention | 82.86 | 0.56 | 0.90 |
| Ours | 88.00 | 0.59 | 0.92 |
+ +Table 9: Ablation study on binary classification results using various attention schemes. Our spatial-temporal attention performs the best. + +Generalizability: From YouTube to real patients. From a healthcare perspective, a crucial and natural question arises: Do the models developed in our YouTubePD benchmark generalize to real-world PD diagnosis? To answer this question, we train our model on YouTubePD and test it on a private dataset of clinical PD patients [23]. This dataset consists of two sets of videos, a speech test of 18 PD-positive and 83 PD-negative patients, and a facial activation test of 19 PD-positive and 85 PD-negative patients. From Table 6, our proposed method has higher F1 and AUROC scores than the baseline on this dataset. In addition, the t-SNE [43] visualizations in Figure 11 of the Appendix show that the PD-positive and PD-negative samples are better separated for our approach. These experimental results validate the efficacy of our benchmark for real-world applications. + +Ablations. We conduct ablation studies to investigate the design choices and components of our proposed facial-expression-based PD classification method. These studies specifically focus on understanding the benefits of different types of pretraining and attention for PD classification. + +Action recognition pretraining. We investigate the transferability of generic video representations to PD classification. We finetune various action recognition architectures pretrained with Kinetics400 [10] on YouTubePD with linear probing. From Table 7, we observe much lower performance compared with generic facial representations from VGGFace [36], despite using a simpler architecture. + +Facial expression classification pretraining. We also investigate the use of a backbone pretrained for facial expression classification, a more fine-grained task than facial recognition. 
We use a ResNet50 pretrained on AffectNet-8 [31] as our backbone, and finetune it on YouTubePD with linear probing. From Table 8, we observe lower performance with the AffectNet-8 backbone, suggesting that generic facial representations transfer more easily to PD classification.

Attention types. We investigate the use of attention over regions and attention over frames. This is combined in our proposed facial-expression-based method. From Table 9, we observe similar overall performance for either temporal or spatial attention and improved performance when combined.

# 7 Discussion

Clinical impact. The long-term goal of our work is to develop a Parkinson's Disease (PD) early screening tool to aid primary clinicians in the earlier recognition of persons exhibiting physical signs that may indicate an evolving Parkinson's syndrome. Those persons may then undergo additional neurological evaluation as clinically indicated. This will help patients to be diagnosed and treated sooner. Many patients with Parkinson's, in retrospect, exhibited subtle signs of Parkinson's for months or even years. The signs were not recognized by the patient, their family, or primary clinicians; instead, the signs were attributed to aging, and diagnosis was delayed. We hope that our effort could eventually inspire such a PD early screening tool to detect PD symptoms at an early stage, so that patients can receive timely treatment.

Annotation reliability. In the construction of our dataset, we made use of annotation information beyond what a single annotator provided. Since we did not construct the dataset completely from scratch but repurposed existing YouTube videos, we explicitly utilized the topic of these YouTube videos and their meta-information. Therefore, we know precisely which videos are PD-positive and which are negative, and mistakes in the video-level PD labels are very unlikely.
In addition, the longitudinal meta-information associated with the YouTube videos provides useful prior knowledge about the PD severity. Empirically, models trained on our annotated dataset generalize to real-world clinical PD data as shown in Table 6. Our method achieves an F1 score that is $10\%$ and $16\%$ higher than the baseline on two different sets of real patient data, respectively. These results suggest that our annotations are reliable.

Limitations and future work. The dataset presented in our benchmark possesses its own caveats, particularly in relation to its small size and limited diversity due to the constraint of publicly available videos. While our benchmark serves as a strong first step, there is still a need for further comprehensive datasets and benchmarks to establish thorough and holistic evaluation of models developed for PD analysis. In addition, an important question remains regarding the generalizability of models developed on our benchmark to critical, real-world medical applications across various settings. Further exploration and analysis of this aspect are necessary to advance towards the actual deployment of these models in clinical scenarios.

Beyond providing the dataset, our contribution includes the establishment of a crucial protocol for collecting a PD dataset. This ensures that our dataset can be readily expanded in the future with more public figures or individuals from additional geographical regions and across different social media platforms other than YouTube. We hope that our work will inspire efforts to create larger PD benchmarks with more annotators. We discuss our limitations and future work in more detail in Section E of the Appendix.

Ethics. Our research involves the analysis of videos featuring public figures with PD. The videos, derived from YouTube, were publicly shared by figures who had willingly discussed their PD condition.
To ensure ethical considerations, we sought explicit consent from these figures. In addressing concerns of data privacy, the research protocol was reviewed and approved by the Institutional Review Board (IRB) at University of Illinois Urbana-Champaign (IRB approval number: 24426). While we acknowledge the potential biases and limitations in solely relying on facial or audio information for PD diagnosis, our main objective is to inspire tools for early detection using easily accessible media like webcams. This initiative does not negate the need for comprehensive medical diagnostics. We stress that our efforts aim to further the understanding of PD and its facial expression impacts. The collected data are shared with researchers who possess ethical training and commit to adhering to high standards. All data are hosted on a dedicated and secure platform, and the code is made available on GitHub. No conflicts of interest exist among the study's contributors. More discussion on the ethical aspect of YouTubePD is included in Section F of the Appendix. + +# 8 Conclusion + +We introduce YouTubePD, the first publicly available multimodal benchmark for Parkinson's Disease (PD) analysis. Within this framework, we propose several important discriminative and generative tasks along with corresponding baselines, showcasing how a range of modern machine learning and computer vision techniques is leveraged and extended to advance AI-assisted systems for automated PD diagnosis. Furthermore, we demonstrate that models trained on YouTubePD perform well on clinical data, indicating potential for real-world medical applications. Notably, the methodology and protocol developed in constructing YouTubePD can be adapted to address other healthcare problems, where a public, standardized benchmark is missing. 
We encourage further exploration in this direction and hope that our benchmark will facilitate a more seamless transfer of advancements in machine learning and computer vision to impactful medical research and clinical applications.

# Acknowledgements

This work was supported in part by the Jump ARCHES endowment through the Health Care Engineering Systems Center, the National Center for Supercomputing Applications (NCSA) at the University of Illinois at Urbana-Champaign through the NCSA Fellows program, NSF Grant 2106825, and NIFA Award 2020-67021-32799. We thank Minh Do and the team for providing real-world Parkinson's patient data.

# References

[1] A tool for evaluating facial expression for early diagnosis of Parkinson's disease. The Michael J. Fox Foundation for Parkinson's Research | Parkinson's Disease, 2018.
[2] Parkinson's disease: Challenges, progress, and promise. National Institute of Neurological Disorders and Stroke, 2023.
[3] Avner Abrami, Steven A. Gunzler, Camilla Kilbane, Rachel Ostrand, Bryan Ho, and Guillermo A Cecchi. Automated computer vision assessment of hypomimia in Parkinson disease: Proof-of-principle pilot study. Journal of Medical Internet Research, 2021.
[4] Alexei Baevski, Yuhao Zhou, Abdelrahman Mohamed, and Michael Auli. wav2vec 2.0: A framework for self-supervised learning of speech representations. In Conference on Neural Information Processing Systems (NeurIPS), 2020.
[5] Andrea Bandini, Silvia Orlandi, Hugo Jair Escalante, Fabio Giovannelli, Massimo Cincotta, Carlos Alberto Reyes-Garcia, P. Vanni, Gaetano Zaccara, and Claudia Manfredi. Analysis of facial expressions in Parkinson's disease through video-based automatic methods. Journal of Neuroscience Methods, 2017.
[6] Achraf Benba, Abdelilah Jilbab, Ahmed Hammouch, and Sara Sandabad. Voiceprints analysis using MFCC and SVM for detecting patients with Parkinson's disease.
In International Conference on Electrical and Information Technologies (ICEIT), 2015.
[7] Gedas Bertasius, Heng Wang, and Lorenzo Torresani. Is space-time attention all you need for video understanding? In International Conference on Machine Learning (ICML), 2021.
[8] Matteo Bologna, Giovanni Fabbrini, Luca Marsili, Giovanni Defazio, Philip D Thompson, and Alfredo Berardelli. Facial bradykinesia. Journal of Neurology, Neurosurgery & Psychiatry, 2013.
[9] Dawn Bowers, Kimberly M. Miller, Wendelyn Bosch, D. Gokcay, Otto Pedraza, Utaka S. Springer, and Michael S. Okun. Faces of emotion in Parkinson's disease: Micro-expressivity and bradykinesia during voluntary facial expressions. Journal of the International Neuropsychological Society, 2006.
[10] Joao Carreira and Andrew Zisserman. Quo vadis, action recognition? A new model and the kinetics dataset. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017.
[11] Min Jin Chong and David Forsyth. JoJoGAN: One shot face stylization. In European Conference on Computer Vision (ECCV), 2022.
[12] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. ImageNet: A large-scale hierarchical image database. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2009.
[13] Christoph Feichtenhofer. X3D: Expanding architectures for efficient video recognition. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2020.
[14] Christoph Feichtenhofer, Haoqi Fan, Jitendra Malik, and Kaiming He. SlowFast networks for video recognition. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2019.
[15] Leon A Gatys, Alexander S Ecker, and Matthias Bethge. A neural algorithm of artistic style. arXiv preprint arXiv:1508.06576, 2015.
5 +[16] Christopher G Goetz, Stanley Fahn, Pablo Martinez-Martin, Werner Poewe, Cristina Sampaio, Glenn T Stebbins, Matthew B Stern, Barbara C Tilley, Richard Dodel, Robert Holloway, Bruno Dubois, Joseph Jankovic, Jaime Kulisevsky, Anthony E Lang, Andrew Lees, Sue Leurgans, Peter A LeWitt, David Nyenhuis, C Warren Olanow, Olivier Rascol, Anette Schrag, Jeanne A Teresi, Jacobus J van Hilten, and Nancy LaPelle. Movement Disorder Society-sponsored revision of the unified Parkinson's disease rating scale (MDS-UPDRS): Scale presentation and clinimetric testing results. In Movement Disorders, 2008. 3, 4, 15 + +[17] Luis F. Gomez, Aythami Morales, Juan R. Orozco-Arroyave, Roberto Daza, and Julian Fierrez. Improving Parkinson detection using dynamic features from evoked expressions in video. In IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), 2021. 2, 3, 5 +[18] Luis Felipe Gomez-Gomez, Aythami Morales, Julian Fierrez, and Juan Rafael Orozco-Arroyave. Exploring facial expressions and affective domains for Parkinson detection. PLOS One, 2020. 3, 5, 6 +[19] Athina Grammatikopoulou, Nikos Grammalidis, Sevasti Bostantjopoulou, and Zoe Katsarou. Detecting hypomimia symptoms by selfie photo analysis: For early Parkinson disease detection. In ACM International Conference on Pervasive Technologies Related to Assistive Environments, 2019. 2, 3 +[20] Helmi Hadi and Caroline M. Wilkinson. Categorizing facial creases: A review. Journal of Cosmetic Dermatology, 2017. 4 +[21] Kaiming He, Georgia Gkioxari, Piotr Dollár, and Ross Girshick. Mask R-CNN. In IEEE International Conference on Computer Vision (ICCV), 2017. 6 +[22] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016. 6, 7, 8, 9, 18 +[23] Trung-Hieu Hoang, Mona Zehni, Huaijin Xu, George Heintz, Christopher Zallek, and Minh N. Do.
Towards a comprehensive solution for a vision-based digitized neurological examination. IEEE Journal of Biomedical and Health Informatics, 2022. 9, 18, 19 +[24] Edward J Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen. LoRA: Low-rank adaptation of large language models. In International Conference on Learning Representations (ICLR), 2022. 8 +[25] Bo Jin, Yue Qu, Liang Zhang, and Zhan Gao. Diagnosing Parkinson disease through facial expression recognition: Video analysis. Journal of Medical Internet Research, 2020. 2, 3, 5 +[26] Tero Karras, Miika Aittala, Janne Hellsten, Samuli Laine, Jaakko Lehtinen, and Timo Aila. Training generative adversarial networks with limited data. In Conference on Neural Information Processing (NeurIPS), 2020. 7, 8, 17 +[27] Tero Karras, Samuli Laine, and Timo Aila. A style-based generator architecture for generative adversarial networks. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2019. 7 +[28] Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In International Conference on Learning Representations (ICLR), 2015. 17 +[29] Megvii Inc. Face++ Research Toolkit. http://www.faceplusplus.com, 2013. 3, 4 +[30] Chenlin Meng, Yang Song, Jiaming Song, Jiajun Wu, Jun-Yan Zhu, and Stefano Ermon. SDEdit: Image synthesis and editing with stochastic differential equations. In International Conference on Learning Representations (ICLR), 2022. 5, 8 +[31] Ali Mollahosseini, Behzad Hasani, and Mohammad H. Mahoor. AffectNet: A new database for facial expression, valence, and arousal computation in the wild. IEEE Transactions on Affective Computing, 2017. 9 +[32] Daisuke Niizumi, Daiki Takeuchi, Yasunori Ohishi, Noboru Harada, and Kunio Kashino. Masked spectrogram modeling using masked autoencoders for learning general-purpose audio representation.
In HEAR: Holistic Evaluation of Audio Representations (Neural Information Processing Systems 2021 Competition), Proceedings of Machine Learning Research, 2022. 7, 17, 18 +[33] Michal Novotny, Tereza Tykalova, Hana Ruzickova, Evžen Ruzicka, Petr Dusek, and Jan Rusz. Automated video-based assessment of facial bradykinesia in de-novo Parkinson's disease. npj Digital Medicine, 2022. 2, 3 +[34] Utkarsh Ojha, Yijun Li, Jingwan Lu, Alexei A Efros, Yong Jae Lee, Eli Shechtman, and Richard Zhang. Few-shot image generation via cross-domain correspondence. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2021. 7, 8, 17 +[35] Taesung Park, Alexei A Efros, Richard Zhang, and Jun-Yan Zhu. Contrastive learning for unpaired image-to-image translation. In European Conference on Computer Vision (ECCV), 2020. 5, 8, 17 +[36] Omkar M. Parkhi, Andrea Vedaldi, and Andrew Zisserman. Deep face recognition. In British Machine Vision Conference (BMVC), 2015. 5, 6, 9 + +[37] Lucia Ricciardi, Matteo Bologna, Francesca Morgante, Diego Ricciardi, Bruno Morabito, Daniele Volpe, Davide Martino, Alessandro Tessitore, Massimiliano Pomponi, Anna Rita Bentivoglio, Roberto Bernabei, and Alfonso Fasano. Reduced facial expressiveness in Parkinson's disease: A pure motor disorder? Journal of the Neurological Sciences, 2015. 3 +[38] Lucia Ricciardi, Federica Visco-Comandini, Roberto Erro, Francesca Morgante, Matteo Bologna, Alfonso Fasano, Diego Ricciardi, Mark J. Edwards, and James M. Kilner. Facial emotion recognition and expression in Parkinson's disease: An emotional mirror mechanism? PLoS ONE, 2017. 3 +[39] Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. High-resolution image synthesis with latent diffusion models. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2022. 8, 17 +[40] Gwenda Simons, Marcia C. Smith Pasqualini, Vasudevi Reddy, and Julia Wood. 
Emotional and nonemotional facial expressions in people with Parkinson's disease. Journal of the International Neuropsychological Society, 2004. 3 +[41] Ge Su, Bo Lin, Jianwei Yin, Wei Luo, Renjun Xu, Jie-Song Xu, and Kexiong Dong. Detection of hypomimia in patients with Parkinson's disease via smile videos. Annals of Translational Medicine, 2021. 2, 3 +[42] Cem Subakan, Mirco Ravanelli, Samuele Cornell, Mirko Bronzi, and Jianyuan Zhong. Attention is all you need in speech separation. In IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2021. 4 +[43] Laurens van der Maaten and Geoffrey Hinton. Visualizing data using t-SNE. In Journal of Machine Learning Research, 2008. 9, 18 +[44] Nomi Vinokurov, David Arkadir, Eduard Linetsky, Hagai Bergman, and Daphna Weinshall. Quantifying hypomimia in Parkinson patients using a depth camera. In International Symposium on Pervasive Computing Paradigms for Mental Health, 2015. 3 +[45] Josefine Waldthaler, Charlotte Krüger-Zechlin, Lena Stock, Zain Deeb, and Lars Timmermann. New insights into facial emotion recognition in Parkinson's disease with and without mild cognitive impairment from visual scanning patterns. Clinical Parkinsonism & Related Disorders, 2019. 3 +[46] Xiaolong Wang, Ross Girshick, Abhinav Gupta, and Kaiming He. Non-local neural networks. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018. 9 +[47] Peng Wu, Isabel Gonzalez, Yorgos Patsis, Dongmei Jiang, Hichem Sahli, Eric Kerckhofs, and Marie Vandekerckhove. Objectifying facial expressivity assessment of Parkinson's patients: Preliminary study. Computational and Mathematical Methods in Medicine, 2014. 3 +[48] Liqiong Yang, Xiangling Chen, Quanhao Guo, Jing Zhang, Man Luo, Xiaqing Chen, Yanxia Wen, Xianwei Zou, and Fan Xu. Changes in facial expressions in patients with Parkinson's disease during the phonation test and their correlation with disease severity. Computer Speech Language, 2022. 
2, 3 +[49] Xu Yao, Gilles Puy, Alasdair Newson, Yann Gousseau, and Pierre Hellier. High resolution face age editing. In International Conference on Pattern Recognition (ICPR), 2021. 5, 8, 17 +[50] Jun-Yan Zhu, Taesung Park, Phillip Isola, and Alexei A Efros. Unpaired image-to-image translation using cycle-consistent adversarial networks. In IEEE International Conference on Computer Vision (ICCV), 2017. 5, 8, 17 + +# Appendix + +In this appendix, we include (1) a discussion of our YouTubePD benchmark details, release, and licensing, (2) additional analysis of the dataset, (3) additional illustrations and details about our methods, (4) additional experimental results and analysis, (5) more discussion on limitations and future work, and (6) potential negative societal impacts. + +# A Benchmark Detail, Release, and Licensing + +We provide our code and benchmark at https://uiuc-yuxiong-lab.github.io/YouTubePD. We release our benchmark under the CC0 license. Here, we describe how we publicly release our benchmark: + +1. We include all the code used in the process of converting publicly available YouTube videos into our benchmark. +2. We include all our annotations and extracted landmarks. Note that offensive content is not included in our dataset, as all the sources are publicly available interviews of public figures speaking openly about their experiences with Parkinson's Disease. + +Although all the videos used in our benchmark are publicly available YouTube videos, we are actively taking steps to approach the public figures involved to respect their autonomy and privacy. This ensures that we uphold the highest standards of ethical data usage. We strive to balance open access and reproducibility with respect for privacy, all while providing a resource that could significantly advance the analysis and understanding of Parkinson's Disease (PD). + +# B Dataset: Additional Analysis + +# B.1 Dataset Statistics +
| Region/Severity | 0 | 1 | 2 | 3 | 4 | 5 |
| --- | --- | --- | --- | --- | --- | --- |
| Overall Video | 187 | 15 | 8 | 10 | 17 | 7 |
| Forehead | 187 | 12 | 3 | 4 | 16 | 18 |
| Left nasolabial fold | 187 | 9 | 6 | 10 | 20 | 10 |
| Right nasolabial fold | 187 | 11 | 7 | 10 | 21 | 8 |
| Right lip crease | 187 | 1 | 0 | 3 | 7 | 3 |
| Left lip crease | 187 | 1 | 0 | 2 | 10 | 3 |
| Left outer eye | 187 | 4 | 2 | 1 | 6 | 17 |
| Right outer eye | 187 | 2 | 2 | 2 | 5 | 16 |
| Between eyebrows | 187 | 5 | 1 | 3 | 5 | 3 |
| Right above between eyebrows | 187 | 3 | 3 | 4 | 12 | 5 |
| Right eye | 187 | 15 | 4 | 3 | 8 | 6 |
| Left eye | 187 | 16 | 4 | 3 | 8 | 6 |
| Mouth | 187 | 1 | 0 | 1 | 5 | 0 |
| Right eyebrow | 187 | 6 | 5 | 8 | 11 | 12 |
| Left eyebrow | 187 | 6 | 4 | 7 | 10 | 12 |
| Total Annotations | 2808 | 107 | 49 | 71 | 161 | 126 |

Table 10: Distribution of severity labels in YouTubePD for the overall video-level analysis and for all 14 facial regions, with 0 denoting the absence of PD and 5 indicating a severe form of PD.
| Country | Count | Race | Count | Gender | Count |
| --- | --- | --- | --- | --- | --- |
| United States | 12 | White/Caucasian | 13 | Male | 15 |
| United Kingdom | 3 | Black/African descent | 3 | Female | 1 |
| Canada | 1 |  |  |  |  |

Table 11: Country/race/gender statistics of PD-positive public figures in YouTubePD.
| Country | Count | Race | Count | Gender | Count |
| --- | --- | --- | --- | --- | --- |
| South Africa | 1 | South African+Swiss-German | 1 | Male | 68 |
| United States | 52 | Black/African Descent+Filipino | 1 | Female | 23 |
| United Kingdom | 12 | Black/African Descent | 11 |  |  |
| Israel | 3 | White/Caucasian | 63 |  |  |
| Russia/Canada | 1 | Indian | 1 |  |  |
| Sweden | 2 | Latinx | 4 |  |  |
| Kenya/Mexico | 2 | Asian | 9 |  |  |
| Brazil | 2 | Black/African Descent+Samoan | 1 |  |  |
| Serbia | 1 |  |  |  |  |
| South Korea | 2 |  |  |  |  |
| Puerto Rico | 1 |  |  |  |  |
| Mexico | 1 |  |  |  |  |
| Japan | 1 |  |  |  |  |
| Canada | 4 |  |  |  |  |
| Denmark | 1 |  |  |  |  |
| Russia | 2 |  |  |  |  |
| Germany | 1 |  |  |  |  |
| France | 2 |  |  |  |  |
+ +Table 12: Country/race/gender statistics of healthy control or PD-negative public figures in YouTubePD. + +In Table 10, we summarize the severity label distribution in YouTubePD. This includes severity labels for the overall subject in each video and severity labels for each of the 14 important facial regions informative for PD analysis. The number of annotations varies depending on severity levels and regions. + +We also summarize the demographic distribution in YouTubePD, split between PD-positive and healthy control (HC), or PD-negative, subjects. Table 11 provides the country, race, and gender statistics of PD-positive subjects in YouTubePD. Similarly, Table 12 provides the country, race, and gender statistics of HC subjects in YouTubePD. + +We would like to provide additional details in our annotation process, particularly regarding how we denote the severity of PD. Our annotation strategy utilizes a detailed scale, ranging from 0 to 5, where 0 signifies a healthy individual, and 5 corresponds to severe PD. We do not apply the Unified Parkinson's Disease Rating Scale (UPDRS) [16] for facial expression. This decision is based on the clinician's suggestion, since an accurate UPDRS facial expression rating would require more information (e.g., observing the subject's facial expression pattern at rest or when not talking) than facial expression videos contain. This strategy also allows for a finer classification. In addition, we do not apply UPDRS because facial expression and audio have distinct UPDRS standards. We instead use the holistic severity and confidence annotation based on the video. Doing so ensures the label consistency between the audio data and other modalities, thereby facilitating multimodal PD classification. + +In addition, we provide (i) illustrations of our facial landmarks and regions in detail in Figures 6, 7, and 8; (ii) the longitudinal statistics of our PD videos showing their distribution in time in Figure 9. 
+ +# B.2 Are Annotated Regions Correlated to PD Severity? + +To understand whether the annotated regions are informative for PD severity, we investigate the correlation between the 14 annotated facial regions and a patient's annotated PD severity level. We accomplish this by training a linear classifier that takes the annotated informative region indices as input and predicts the severity level. More specifically, for each video, the input is a binary vector indicating the clinician-annotated facial regions, while the output is matched to the annotated severity level of the video. As a control experiment, we instead train on random region-level annotations as input. We find that the linear model trained on the actual region annotations achieves $70\%$ test accuracy, while the model trained on random annotations achieves $15\%$ test accuracy. This validates that the severity level is predictable from the annotated informative regions, but not from random region annotations. Therefore, the annotated regions and PD severity are correlated. This is also consistent with our approach that leverages region-level information for improved PD classification performance. + +![](images/62170450a5bb11fa77b9e10c2ab6ea7f3ef89f4720e5a0e8f46241dc1368f1f9.jpg) +Figure 6: Original landmark extraction. + +![](images/4e9e8b118eb18459b8e40cb614b9f564b52e6a17d0d14910dd090f705ecae7a1.jpg) +Figure 7: Interpolated landmarks. + +![](images/a08e72acd213e41f843757845dd05f6849d42a3b4eab0564a8a6c0629de1a2b8.jpg) +Figure 8: Visualized regions from interpolated landmarks. + +![](images/507f79159e89bf0e08f134c6d1c5194de55a8ed040a230063f17c149e358c237.jpg) +Figure 9: Longitudinal data for the time gap between PD-negative and PD-positive videos. The x-axis represents public figures, while the y-axis represents the time in years. Red dots denote videos with PD-positive labels, and blue dots denote videos with PD-negative labels.
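The control experiment above can be sketched as follows. The data here are synthetic stand-ins (the real annotations live in the benchmark), and the nearest-centroid classifier is an illustrative substitute for the authors' linear model rather than their actual code:

```python
import numpy as np

rng = np.random.default_rng(0)
NUM_REGIONS, NUM_LEVELS = 14, 6   # 14 facial regions, severity levels 0-5

def region_vector(indices):
    """Binary vector marking which facial regions were annotated as informative."""
    v = np.zeros(NUM_REGIONS)
    v[list(indices)] = 1.0
    return v

# Synthetic stand-in annotations: higher severity -> more informative regions.
severities = rng.integers(0, NUM_LEVELS, size=200)
X_real = np.stack([region_vector(rng.choice(NUM_REGIONS, 1 + 2 * s, replace=False))
                   for s in severities])
X_rand = rng.integers(0, 2, size=X_real.shape).astype(float)  # random-annotation control

def nearest_centroid_accuracy(X, y):
    """Fit per-class centroids and report training accuracy (a linear classifier)."""
    centroids = np.stack([X[y == k].mean(axis=0) for k in range(NUM_LEVELS)])
    dists = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
    return float((dists.argmin(axis=1) == y).mean())

acc_real = nearest_centroid_accuracy(X_real, severities)
acc_rand = nearest_centroid_accuracy(X_rand, severities)
print(acc_real, acc_rand)  # the real encoding is far more predictive than the control
```

As in the paper's experiment, the classifier trained on meaningful region annotations clearly outperforms the one trained on random annotations.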
+ +Furthermore, we can examine the learned weights and biases of the linear model to understand how the model has learned to classify. We find that, in general, higher-severity patients have positive annotations on a larger number of informative regions (more symptoms), and vice versa. The model also leverages different facial regions to determine severity at different levels; for example, very severe cases can be easily distinguished via eye and lip features. Figure 3 in the main paper visualizes the most informative regions at each severity level. Corresponding regions are indicated by highlighted facial creases in the figure and bounding boxes in the input video frames. + +# C Methods: Additional Illustration and Details + +# C.1 Model Architecture Illustration + +We provide an additional illustration of our baseline method for the first task in Figure 10. An input video is processed through two branches, one for video features and the other for region features. These features are then aggregated with a spatial-temporal attention classifier to obtain the PD classification result. + +![](images/6320461916b5cbb8b10a3f08b9574f5b073a107a9b9c7bc639bea99c9bd41093.jpg) +Figure 10: Illustration of our baseline method for the facial expression-based PD classification. An input video is processed through two branches, one for video features and the other for region features. These features are then aggregated with a spatial-temporal attention classifier to obtain the PD classification result. + +# C.2 Implementation Details + +For the facial-expression-based classification method, we use $\lambda = 0.9$, emphasizing the binary portion of the loss. We use all 14 regions, i.e., $|R| = 14$. We use 8 frames from each video and a batch size of 32. We train our models with the Adam [28] optimizer with a learning rate of 0.001. More details are provided in the code. All experiments corresponding to this task are conducted on a single 16GB NVIDIA V100 GPU.
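The $\lambda$-weighted combination described above can be sketched as follows. This is a hedged reconstruction of the weighting scheme only: the exact per-term losses are defined in the released code, and binary cross-entropy is assumed here for both the video-level and region-level terms:

```python
import numpy as np

def bce(p, y, eps=1e-7):
    """Mean binary cross-entropy of predicted probabilities p against labels y."""
    p = np.clip(p, eps, 1 - eps)
    return float(-(y * np.log(p) + (1 - y) * np.log(1 - p)).mean())

def combined_loss(p_video, y_video, p_regions, y_regions, lam=0.9):
    """lam weights the binary (video-level) term and (1 - lam) the region-level
    term, mirroring lambda = 0.9 from the implementation details above."""
    return lam * bce(p_video, y_video) + (1 - lam) * bce(p_regions, y_regions)

# Toy example: one video-level prediction plus 14 region-level predictions.
p_video, y_video = np.array([0.9]), np.array([1.0])
p_regions, y_regions = np.full(14, 0.2), np.zeros(14)
print(combined_loss(p_video, y_video, p_regions, y_regions))
```

With $\lambda = 0.9$, the video-level binary term dominates the total, which matches the stated intent of emphasizing the binary portion of the loss.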
+ +For the audio baseline, we utilize the entire audio track and average the representations to obtain a single 768-dimensional audio-level feature vector. Afterwards, we use a batch size of 64 and feed the input through an MLP with one hidden layer of size 1,024. We train the model using Adam with a learning rate of 0.0003. Similarly, for the landmark baseline, we use 8 frames to maintain consistency with the other modalities, and use the same hyperparameters for Adam. For both modalities, we use $\lambda = 0.5$. All experiments are conducted on a single NVIDIA 3060 GPU. + +For the multimodal fusion baseline, we use 8 frames from each video. The batch size is set to 16, and the model is trained for 50 epochs. We train the model using Adam with a learning rate of 0.02. All experiments are conducted on a single NVIDIA 4090 GPU. + +For PD progression synthesis, we follow the settings used in the official implementations [11, 26, 34, 35, 39, 49, 50]. In addition, we make modifications to two methods to align with our task setting. Specifically, we replace the age classifier in HRFAE [49] with our trained PD binary classification model. As for JoJoGAN [11], we iteratively sample images during training to utilize all available images. All experiments are conducted on a single NVIDIA 4090 GPU. + +# D Experiments: Additional Results and Analysis + +# D.1 Additional Ablations + +Alternative audio representations. We additionally explore alternative feature representations for audio beyond the masked auto-encoders (MAE) [32] presented in the main paper – namely, wav2vec (W2V) 2.0 [4], a deep feature extractor for audio that also uses self-supervised learning, and Mel-frequency cepstral coefficients (MFCC) [6], which have demonstrated reasonable success in general audio processing tasks. We find empirically that MAE features work the best with regard to metrics and consistency, as shown in Table 13.
In general, wav2vec features remain competitive with the MAE features though trailing slightly, while MFCC performs considerably worse, likely due to its inability to express the complex features necessary for the task, given the simplicity of the classification model. Note that the hand-crafted MFCC achieves relatively high performance on AUROC, since it effectively makes random guesses among all classes, whereas the learned features are more biased by the data imbalance between class 0 and all other classes. This is because AUROC is computed as an unweighted average of one-vs-all calculations, leading MFCC to incorrectly appear competitive on that metric.
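The unweighted one-vs-all averaging can be sketched as follows. This is an illustrative reimplementation, not the paper's evaluation code, showing why an uninformative scorer lands near 0.5 on this metric regardless of class imbalance:

```python
import numpy as np

def auc_binary(scores, labels):
    """AUROC via the Mann-Whitney U statistic (ties get half credit)."""
    pos, neg = scores[labels == 1], scores[labels == 0]
    wins = (pos[:, None] > neg[None, :]).sum() + 0.5 * (pos[:, None] == neg[None, :]).sum()
    return wins / (len(pos) * len(neg))

def macro_ovr_auroc(score_matrix, y, num_classes):
    """Unweighted one-vs-all average over classes, as in the discussion above."""
    aucs = [auc_binary(score_matrix[:, c], (y == c).astype(int))
            for c in range(num_classes)]
    return float(np.mean(aucs))

rng = np.random.default_rng(0)
y = rng.integers(0, 6, size=300)            # imbalanced-or-not, it does not matter
random_scores = rng.random((300, 6))        # an uninformative "random guess" scorer
print(macro_ovr_auroc(random_scores, y, 6)) # close to 0.5 for every class mix
```

Because each one-vs-all AUC is ranking-based, a scorer with no information about the labels scores about 0.5 per class, and the unweighted macro average inherits that value.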
| Audio Feature | Top-1 Acc ↑ | F1 ↑ | AUROC ↑ | MSE ↓ |
| --- | --- | --- | --- | --- |
| MAE [32] | 47.79 (±1.66) | 0.16 (±0.01) | 0.50 (±0.02) | 6.26 (±0.39) |
| W2V [4] | 57.90 (±7.46) | 0.14 (±0.02) | 0.43 (±0.04) | 4.50 (±0.93) |
| MFCC [6] | 39.42 (±9.66) | 0.11 (±0.03) | 0.49 (±0.06) | 6.48 (±1.80) |

Table 13: Ablation study on audio representations for multiclass classification on YouTubePD. 'MAE' denotes masked auto-encoders presented in the main paper; 'W2V' denotes wav2vec 2.0; 'MFCC' denotes Mel-frequency cepstral coefficients. We find that MAE provides the most stable and consistent results – we prioritize F1 and AUROC, as the other metrics are influenced by the data imbalance.

Multimodal fusion strategies. We conduct an ablation study on the multimodal fusion strategy. Specifically, we explore two different strategies to produce the logits for the facial expression video and facial landmark modalities. One way is frame concatenation (FC), where we concatenate the frame features, and the other is frame voting (FV), where we perform voting to aggregate the result of each frame. Note that FV is used for results in Table 3. For FC, we concatenate the frames' features into one vector representing the whole video (either image features from ResNet or landmark coordinates). Then, we train a video-level classifier to obtain the video logits. For FV, we train a frame-level classifier for each frame and average the predictions as video-level logits. For both strategies, the video-level logits from different modalities are averaged to get the final prediction. As shown in Table 14, with FC, the multimodal performance is even lower than the single facial expression modality on the F1 metric. By contrast, the FV strategy helps to improve performance.
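The two aggregation strategies can be sketched as follows, using untrained stand-in classifiers and illustrative feature dimensions (the actual models use ResNet features and landmark coordinates):

```python
import numpy as np

rng = np.random.default_rng(0)
NUM_FRAMES, FEAT_DIM, NUM_CLASSES = 8, 16, 6
frames_a = rng.standard_normal((NUM_FRAMES, FEAT_DIM))  # e.g. facial-expression features
frames_b = rng.standard_normal((NUM_FRAMES, FEAT_DIM))  # e.g. landmark features

# Frame concatenation (FC): one video-level classifier on concatenated frame features.
W_fc = rng.standard_normal((NUM_FRAMES * FEAT_DIM, NUM_CLASSES)) * 0.01
def fc_logits(frames):
    return frames.reshape(-1) @ W_fc

# Frame voting (FV): a frame-level classifier whose predictions are averaged over frames.
W_fv = rng.standard_normal((FEAT_DIM, NUM_CLASSES)) * 0.01
def fv_logits(frames):
    return (frames @ W_fv).mean(axis=0)

# For both strategies, video-level logits from the modalities are averaged at the end.
final_fv = 0.5 * (fv_logits(frames_a) + fv_logits(frames_b))
print(int(np.argmax(final_fv)))
```

One plausible reading of the ablation is that FV keeps the per-frame classifier small and averages out frame-level noise, whereas FC multiplies the input dimensionality by the number of frames, which is harder to fit on a small dataset.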
| Method | Top-1 Acc ↑ | F1 ↑ | AUROC ↑ | MSE ↓ |
| --- | --- | --- | --- | --- |
| VGGFace [22] | 78.20 (±3.13) | 0.23 (±0.02) | 0.68 (±0.01) | 2.29 (±0.77) |
| Multimodal (FC) | 79.23 (±1.94) | 0.21 (±0.02) | 0.69 (±0.01) | 1.76 (±0.18) |
| Multimodal (FV) | 82.75 (±2.85) | 0.28 (±0.02) | 0.80 (±0.03) | 1.40 (±0.25) |
+ +Table 14: Ablation study on multimodal fusion strategies for multiclass classification on YouTubePD. 'FC' denotes frame concatenation, and 'FV' denotes frame voting. We find that the frame voting strategy improves the fusion performance, while the frame concatenation strategy even leads to a decrease in performance on the F1 metric, compared with the single facial expression modality. + +# D.2 t-SNE Visualizations + +We provide qualitative results for the performance of our facial-expression-based classification model. We use t-SNE [43] on both YouTubePD and the private clinical dataset [23], shown in Figure 11. Our approach is able to clearly separate the PD-positive and PD-negative classes on both distributions. + +# E More Discussion on Limitations and Future Work + +Limitations. In the main paper, we have briefly discussed the limitations of our work. Here, we provide a more in-depth discussion. First, as our dataset is composed of publicly available YouTube videos of public figures, the subjects and video samples in the dataset may not adequately represent the wide range of individuals affected by PD. The videos primarily capture interview scenarios, which may not effectively showcase the indicative symptoms of subjects, compared with the motor tasks and instructions typically used in medical studies. Furthermore, there is a demographic bias in the dataset subjects, as they are all public figures (predominantly male) with very few available details about their treatment course and disease progression. Meanwhile, we are not aware of the treatment or treatment response experienced by these individuals. + +Second, it is necessary to conduct further investigation and analysis of the performance and deployment of models developed using our benchmark in real-world clinical scenarios. In the main paper, we have demonstrated that our method developed on the benchmark exhibits promising results on a clinical dataset.
More comprehensive evaluation on additional clinical datasets would validate the broad generalizability of our benchmark and associated models to practical medical applications. + +![](images/b299d75359ce36adc81110589b2f734a27b63574f37d19d3fb13175e17419712.jpg) +(a) t-SNE visualization for YouTubePD. + +![](images/81d1d7bd941a9bde5ea40edaed00513bcbf8c15622865be715cb8489f2c77254.jpg) +(b) t-SNE visualization for the facial activation set with clinical patients. +Figure 11: t-SNE visualizations of our learned facial expression representation for healthy (class 0) and PD-positive (class 1) subjects on YouTubePD and the clinical dataset [23]. Intriguingly, we observe a more pronounced separation between the PD-positive and PD-negative classes on clinical data, demonstrating the generalizability of our learned representation from the in-the-wild YouTube videos. + +Finally, the size of the dataset is relatively small compared with typical computer vision datasets, due to the inherent challenges involved in collecting PD data. The small dataset size increases the difficulty of developing and training larger models from scratch, often necessitating some form of finetuning to achieve reasonable performance. + +Future work. These limitations open up a wide scope of future advancements and progress in this field. As mentioned previously, while our benchmark represents a strong first step, further comprehensive datasets and benchmarks are necessary to thoroughly evaluate the performance and generalizability of methodologies prior to their deployment. Moreover, our findings highlight the potential of developing multimodal frameworks that leverage various examination modalities and track complementary symptoms, such as facial expression, speech, posture, and gait for PD classification. 
Although PD classification has been the primary focus (as in our first two proposed tasks), we note that there are interesting unexplored directions in this realm, particularly in generative tasks like progression synthesis (as in our third proposed task), which can serve as effective augmentation and learning techniques. + +# F Ethics Discussion + +# F.1 Personally Identifiable Information and Informed Consent + +YouTubePD may include personally identifiable information (PII) or sensitive personally identifiable information. The data we collect from YouTube include facial expressions, PD identity, and audios. However, we would like to highlight that the public figures chose to make their struggles with PD public and discussed their disease and diagnosis in front of cameras. By willingly revealing their identifiable faces and voices, the public figures do not intend to keep their PD information fully private. We believe that the concern regarding a breach of privacy is not a newly raised issue specific to our work, as the possibility of any misuse of these videos already exists. + +The central question we posed to ourselves was whether sharing these videos with our research community, along with annotations of facial expressions, would amplify the risk of misuse. We are of the opinion that this action does not escalate the aforementioned risk. To further address this matter, we took the initiative to contact the public figures involved and requested permission. This step was taken proactively, particularly in the event that the public figures had regrets about their previous decision to go public and now wished to make a different choice. + +Regarding PII and sensitive PII, we are fully aware of the sensitive nature of the data we are working with. 
In order to safeguard individuals' privacy, we have sought guidance from both the Institutional Review Board (IRB) Office and the Legal Department at the University of Illinois Urbana-Champaign to ensure compliance with regulations. Furthermore, in line with ethical norms, we have made efforts to obtain explicit consent from each public figure featured in the videos. The consent form clearly states our intention in using these data and how they are expected to be used. We remove videos of public figures who do not wish to be part of our dataset. In addition, we acknowledge the concern about the potential for individuals and their families to feel uncomfortable with the label of "illness." While we respect this sensitivity, we emphasize that our intention is to contribute to a better understanding of PD, its impact on facial expressions, facial landmarks, and audio, and the potential for technological advancements. We approach this research with the utmost respect for the individuals involved and strive to contribute positively to the discourse around the disease. + +# F.2 Negative Societal Impact + +While our work provides promising advancements in AI-assisted analysis and severity evaluation of PD, we recognize that it may also present potential negative societal impacts that deserve careful consideration. + +The first concern pertains to privacy. The videos we use for our work are publicly available, featuring public figures who openly discuss their experience with PD. However, widespread use of similar technology could raise issues of privacy, as individuals may not wish to have their health condition detected or revealed, even inadvertently, through casual video or audio footage. As healthcare professionals and researchers, it is critical to respect patient privacy and consent in all facets of care and study. Second, there is a risk of misuse or over-reliance on our technology.
While the goal of our work is to aid the detection of PD, it should not replace the professional diagnosis of healthcare providers. Misinterpretation or misuse of this technology may lead to false positives or negatives, causing unnecessary distress or false reassurance. Lastly, issues of inequity may also arise. Access to advanced diagnostic tools such as the one we propose may be limited, due to geographic location, financial constraints, or digital literacy. As such, this technology could inadvertently widen the healthcare disparity between different socioeconomic groups. + +In light of these potential societal impacts, it is essential that proper protocols and measures are put in place to guide the ethical use of such technologies. This includes clear communication about the tool's intended use, rigorous validation processes, and ongoing dialogue about equitable access to and use of these technological advances. + +# F.3 Mitigating Bias and Negative Societal Impacts + +Some ethical risks exist in the originally publicly available YouTube videos. We are aware that we cannot mitigate those risks to zero; some residual risk will remain. Again, we emphasize that our intention is to promote a better understanding of PD, its effects on facial expressions, facial landmarks, and audio, as well as to explore the potential for technological advancements in this field. We aim to ensure the highest possible standards of ethical conduct in downstream research. + +# F.4 Responsibility of AI-Assisted Systems + +As mentioned in Section 7 of the main paper, our benchmark aims to inspire a PD early screening tool based on modern machine learning and computer vision techniques. This tool would assist primary clinicians in identifying individuals who may be displaying early physical signs indicative of an evolving Parkinson's syndrome. These individuals can then undergo further neurological evaluation as clinically indicated.
This proactive approach will expedite diagnosis and treatment, potentially leading to improved outcomes. On the other hand, while we acknowledge the potential of facial videos and audios in aiding PD detection, we do not advocate for clinicians to rely solely on these modalities for diagnosis. Instead, if positive detection results emerge from facial videos and audios, we recommend that patients seek medical attention at an earlier stage and obtain a more comprehensive diagnosis using additional assessments, such as Dopamine Transporter Scan (DAT), Magnetic Resonance Imaging (MRI), and Positron Emission Tomography (PET). \ No newline at end of file diff --git a/youtubepdamultimodalbenchmarkforparkinsonsdiseaseanalysis/images.zip b/youtubepdamultimodalbenchmarkforparkinsonsdiseaseanalysis/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..39f0a8a508ef9de814106a5a6785c7a30875a332 --- /dev/null +++ b/youtubepdamultimodalbenchmarkforparkinsonsdiseaseanalysis/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:2d437eb28fdc2a56f79b3df05fdc95472a2f1ca63cc58ae106605f6c2999a031 +size 658344 diff --git a/youtubepdamultimodalbenchmarkforparkinsonsdiseaseanalysis/layout.json b/youtubepdamultimodalbenchmarkforparkinsonsdiseaseanalysis/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..dbd67c5d8bfa5f9002880933dd3d76c9bee5bc62 --- /dev/null +++ b/youtubepdamultimodalbenchmarkforparkinsonsdiseaseanalysis/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0bf5564f02c7e9565bf1c1bc3cd06904048a02dc795beac539fe43a7fb8cf02f +size 495649 diff --git a/zeroonelawsofgraphneuralnetworks/d3c9e0cc-8fba-4e7f-bc20-6d983758029d_content_list.json b/zeroonelawsofgraphneuralnetworks/d3c9e0cc-8fba-4e7f-bc20-6d983758029d_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..b62befa700a501bace1e3f2c62f861d9bb87cc22 --- /dev/null +++ 
# Zero-One Laws of Graph Neural Networks

Sam Adam-Day*

Department of Mathematics

University of Oxford

Oxford, UK

sam.adam-day@cs.ox.ac.uk

Theodor-Mihai Iliant

Department of Computer Science

University of Oxford

Oxford, UK

theodor-mihai.iliant@lmh.ox.ac.uk

Ismail Ilkan Ceylan

Department of Computer Science

University of Oxford

Oxford, UK
ismail.ceylan@cs.ox.ac.uk

# Abstract

Graph neural networks (GNNs) are the de facto standard deep learning architectures for machine learning on graphs. This has led to a large body of work analyzing the capabilities and limitations of these models, particularly pertaining to their representation and extrapolation capacity. We offer a novel theoretical perspective on the representation and extrapolation capacity of GNNs by answering the question: how do GNNs behave as the number of graph nodes becomes very large? Under mild assumptions, we show that when we draw graphs of increasing size from the Erdős-Rényi model, the probability that such graphs are mapped to a particular output by a class of GNN classifiers tends either to zero or to one. This class includes the popular graph convolutional network architecture. The result establishes 'zero-one laws' for these GNNs and, analogously to other convergence laws, entails theoretical limitations on their capacity. We empirically verify our results, observing that the theoretical asymptotic limits are evident already on relatively small graphs.

# 1 Introduction

Graphs are common structures for representing relational data in a wide range of domains, including physical [35], chemical [7, 18], and biological [42, 10] systems, which has sparked interest in machine learning over graphs. Graph neural networks (GNNs) [33, 14] have become prominent models for graph machine learning for a wide range of tasks, owing to their capacity to explicitly encode desirable relational inductive biases [5]. One important virtue of these architectures is that every GNN model can be applied to arbitrarily large graphs, since in principle the model parameters are independent of the graph size. This raises the question: how do GNNs behave as the number of nodes becomes very large? When acting as binary classifiers, GNNs can be thought of as parametrizing Boolean properties of (labelled) graphs.
A classical method of specifying such properties is through first-order formulas, which allow for precise definitions using a formal language [9]. The celebrated 'zero-one law' for first-order logic [13, 8] provides a crisp answer to the question of the asymptotic behaviour of such properties: as graphs of increasing size are drawn from the Erdős-Rényi distribution, the probability that a first-order property holds either tends to zero or to one.

In this paper, we show an analogous result for binary classification GNNs: under mild assumptions on the model architecture, several GNN architectures including graph convolutional networks [21] satisfy a zero-one law over Erdős-Rényi graphs with random node features. The principal import of this result is that it establishes a novel upper bound on the expressive power of GNNs: any property of graphs which can be uniformly expressed by a GNN must obey a zero-one law. An example of a simple property which does not asymptotically tend to zero or one is that of having an even number of nodes. Note, however, that our result, combined with the manifest success of GNNs in practice, suggests that zero-one laws must be abundant in nature: if a property we cared about did not satisfy a zero-one law, none of the GNN architectures we consider would be able to express it.
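As a quick illustration of the first-order zero-one law, the following stdlib-only Monte Carlo sketch (the property "contains a triangle", the edge probability, and the trial counts are our own illustrative choices, not taken from the paper) estimates the probability that $G \sim \mathbb{G}(n, 1/2)$ satisfies the first-order property $\exists u \exists v \exists w\, E(u,v) \wedge E(v,w) \wedge E(u,w)$, which climbs towards one as $n$ grows:

```python
import random
from itertools import combinations

def sample_er(n, r, rng):
    """Adjacency sets of a simple undirected Erdos-Renyi graph from G(n, r)."""
    adj = {v: set() for v in range(n)}
    for u, v in combinations(range(n), 2):
        if rng.random() < r:
            adj[u].add(v)
            adj[v].add(u)
    return adj

def has_triangle(adj):
    """First-order property: there exist pairwise-adjacent nodes u, v, w."""
    return any(adj[u] & adj[v] for u in adj for v in adj[u])

rng = random.Random(0)
for n in [5, 20, 80]:
    hits = sum(has_triangle(sample_er(n, 0.5, rng)) for _ in range(200))
    print(n, hits / 200)  # empirical P(triangle) rises towards 1 with n
```

For properties tending to zero (e.g. "the graph is triangle-free"), the same experiment shows the complementary behaviour.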
We complement this with a corresponding lower bound, showing that these architectures can universally approximate any property which satisfies a certain zero-one law.

A key strength of the results is that they apply equally well to randomly initialized networks, trained networks, and anything in between. In this sense, our asymptotic analysis is orthogonal to the question of optimization, and holds regardless of the choice of training method. Another interesting aspect of these results is that they unite the analysis of expressive power with extrapolation capacity. Our zero-one laws simultaneously provide limits on the ability of GNNs to extrapolate from smaller Erdős-Rényi graphs to larger ones: eventually any GNN must classify all large graphs the same way.

To validate our theoretical findings, we conduct a series of experiments: since zero-one laws are of an asymptotic nature, we may need to consider very large graphs to observe clear empirical evidence for the phenomenon. Surprisingly, however, GNNs already exhibit clear evidence of a zero-one law even on small graphs. Importantly, this is true for networks with very few layers (even a single layer), which is reassuring, as it precludes confounding factors such as the effect of over-smoothing due to an increased number of layers [23]. We provide further experimental results in the appendix of this paper, where all proofs of technical statements can also be found. We make the code for our experiments available online at https://github.com/SamAdamDay/Zero-One-Laws-of-Graph-Neural-Networks.

# 2 Preliminaries

Random graphs and matrices. The focus of our study is on classes of random graphs with random features, for which we introduce some notation. We write $\mathbf{x} \in \mathbb{R}^d$ to represent a vector, and $\mathbf{X} \in \mathbb{R}^{d \times n}$ to represent a matrix.
Analogously, we write $\mathbf{x}$ to denote a random vector, and $\mathbf{X}$ to denote a random matrix, whose entries are (real) random variables. We write $\mathbb{G}(n,r)$ to denote the class of simple, undirected Erdős-Rényi (ER) graphs with $n$ nodes and edge probability $r$, and let $\mathbb{D}(d)$ denote some distribution of feature vectors over $\mathbb{R}^d$. We define an Erdős-Rényi graph equipped with random node features as a pair $\mathcal{G} = (\mathbf{A},\mathbf{X})$, where $\mathbf{A} \sim \mathbb{G}(n,r)$ is the random adjacency matrix of the graph $G = (V,E)$ and $\mathbf{X} \in \mathbb{R}^{d \times n}$ is a corresponding random feature matrix, independent of $G$, which contains, for each node $v \in V$, an initial random node feature $\mathbf{x}_v \sim \mathbb{D}(d)$ as the corresponding column of $\mathbf{X}$.

Message passing neural networks. The focus of this work is on message-passing neural networks (MPNNs) [12, 17], which encapsulate the vast majority of GNNs. The fundamental idea in MPNNs is to update the initial (random) state vector $\mathbf{x}_v^{(0)} = \mathbf{x}_v$ of each node $v$ for $T \in \mathbb{N}$ iterations, based on its own state and the state of its neighbors $\mathcal{N}(v)$ as:

$$
\mathbf{x}_{v}^{(t+1)} = \phi^{(t)}\left(\mathbf{x}_{v}^{(t)}, \psi^{(t)}\left(\mathbf{x}_{v}^{(t)}, \{\{\mathbf{x}_{u}^{(t)} \mid u \in \mathcal{N}(v)\}\}\right)\right),
$$

where $\{\{\cdot\}\}$ denotes a multiset, and $\phi^{(t)}$ and $\psi^{(t)}$ are differentiable combination and aggregation functions, respectively. Each layer's node representations can have different dimensions: we denote by $d(t)$ the dimension of the node embeddings at iteration $t$ and typically write $d$ in place of $d(0)$.

The final node representations can then be used for node-level predictions.
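The update rule above can be sketched in a few lines of NumPy. This is an illustrative instance, not the paper's code: here $\psi$ is the multiset sum over neighbors, $\phi$ is a tanh of a linear map, and node features are stored as rows rather than as columns of $\mathbf{X}$:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 6, 4                              # nodes, feature dimension

# Random simple undirected graph and random node features (row v of X is x_v).
A = np.triu(rng.random((n, n)) < 0.5, k=1)
A = (A + A.T).astype(float)
X = rng.normal(size=(n, d))
W = rng.normal(size=(d, d)) / np.sqrt(d)

def mpnn_layer(A, X, W):
    """One message-passing step: psi = sum over the neighbor multiset,
    phi = tanh of a linear map of (own state + aggregated messages)."""
    msgs = A @ X                         # row v holds the sum of x_u over u in N(v)
    return np.tanh((X + msgs) @ W)

X1 = mpnn_layer(A, X, W)                 # node embeddings after one iteration
```

Because the aggregation is over an unordered multiset, the layer is permutation-equivariant: relabelling nodes before the layer is the same as relabelling its output.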
For graph-level predictions, the final node embeddings are pooled to form a graph embedding vector to predict properties of entire graphs. The pooling often takes the form of simple averaging, summing, or component-wise maximum. For Boolean node (resp., graph) classification, we further assume a classifier $\mathfrak{C}:\mathbb{R}^{d(T)}\to \mathbb{B}$ which acts on the final node representations (resp., on the final graph representation).

There exist more general message passing paradigms [5], such as MPNNs with global readout, which additionally aggregate over all node features at every layer and are known to be more expressive [4]. Some model architectures considered in this paper include a global readout component, and we consider different choices for the combine $(\phi^{(t)})$ and aggregate $(\psi^{(t)})$ functions, as we introduce next.

GCN. The primary GNN architecture we consider is graph convolutional networks (GCN) [21]. These are instances of MPNNs with self-loops, which aggregate over the extended neighborhood of a node $\mathcal{N}^{+}(v)\coloneqq \mathcal{N}(v)\cup \{v\}$. GCNs iteratively update the node representations as $\mathbf{x}_v^{(t)} = \sigma \left(\mathbf{y}_v^{(t)}\right)$, where the preactivations are given by:

$$
\mathbf{y}_{v}^{(t)} = \boldsymbol{W}_{n}^{(t)} \sum_{u \in \mathcal{N}^{+}(v)} \frac{1}{\sqrt{|\mathcal{N}(u)||\mathcal{N}(v)|}} \mathbf{x}_{u}^{(t-1)} + \boldsymbol{b}^{(t)}
$$

We apply the linear transformation $\mathbf{W}_n^{(t)} \in \mathbb{R}^{d(t) \times d(t-1)}$ to a normalized sum of the previous-layer activations of the neighbors of the node under consideration, together with its own activation. Adding a bias term $\mathbf{b}^{(t)}$ yields the preactivation $\mathbf{y}_v^{(t)}$, to which we apply the non-linearity $\sigma$.

MEANGNN.
We also consider the MEANGNN$^+$ architecture, which is a self-loop GNN with mean aggregation and global readout [17], and updates the node representations as $\mathbf{x}_v^{(t)} = \sigma (\mathbf{y}_v^{(t)})$, where:

$$
\mathbf{y}_{v}^{(t)} = \frac{1}{|\mathcal{N}^{+}(v)|} \boldsymbol{W}_{n}^{(t)} \sum_{u \in \mathcal{N}^{+}(v)} \mathbf{x}_{u}^{(t-1)} + \frac{1}{n} \boldsymbol{W}_{r}^{(t)} \sum_{u \in V} \mathbf{x}_{u}^{(t-1)} + \boldsymbol{b}^{(t)}
$$

MEANGNN$^+$ models additionally apply a linear transformation $\boldsymbol{W}_r^{(t)} \in \mathbb{R}^{d(t) \times d(t-1)}$ to the mean of all previous node representations. We refer to MEANGNN as the special case of this architecture which does not include a global readout term (obtained by dropping the second term in the equation).

SUMGNN. Finally, we consider the SUMGNN$^+$ architecture, which is a GNN with sum aggregation and global readout [12], and updates the node representations as $\mathbf{x}_v^{(t)} = \sigma (\mathbf{y}_v^{(t)})$, where:

$$
\mathbf{y}_{v}^{(t)} = \boldsymbol{W}_{s}^{(t)} \mathbf{x}_{v}^{(t-1)} + \boldsymbol{W}_{n}^{(t)} \sum_{u \in \mathcal{N}(v)} \mathbf{x}_{u}^{(t-1)} + \boldsymbol{W}_{r}^{(t)} \sum_{u \in V} \mathbf{x}_{u}^{(t-1)} + \boldsymbol{b}^{(t)}
$$

This time, we separate out the contribution of the node's own previous-layer activation. This yields three linear transformations $\boldsymbol{W}_s^{(t)}, \boldsymbol{W}_n^{(t)}, \boldsymbol{W}_r^{(t)} \in \mathbb{R}^{d(t) \times d(t-1)}$. The corresponding architecture without the global readout term is called SUMGNN.

# 3 Related work

Graph neural networks are flexible models which can be applied to graphs of any size following training.
This makes an asymptotic analysis in the size of the input graphs very appealing, since such a study could lead to a better understanding of the extrapolation capabilities of GNNs, which is widely studied in the literature [41, 40]. Previous studies of the asymptotic behaviour of GNNs have focused on convergence to theoretical limit networks [20, 31] and their stability under the perturbation of large graphs [11, 22].

Zero-one laws have a rich history in first-order logic and random graph theory [13, 8, 25, 34, 6]. Being the first of its kind in the graph machine learning literature, our study establishes new links between graph representation learning, probability theory, and logic, while also presenting a new and interesting way to analyze the expressive power of GNNs. It is well known that the expressive power of MPNNs is upper bounded by the 1-dimensional Weisfeiler-Leman graph isomorphism test (1-WL) [39, 29], and architectures such as SUMGNN$^+$ [29] can match this. Barceló et al. [4] further give a logical characterization for a class of MPNNs, showing that SUMGNN$^+$ can capture any function which can be expressed in the logic $C^2$, an extension of the two-variable fragment of first-order logic with counting quantifiers. Several works study the expressive power of these models under the assumption that there are unique node identifiers [26], or define higher-order GNN models [29, 27, 28, 19] to obtain more expressive architectures.

Our work has direct implications for GNNs using random node features [32, 1], which are universal in the bounded graph domain. Specifically, we derive a zero-one law for GNNs using random node features which puts an upper bound on the expressive power of such models in a uniform sense: what class of functions on graphs can be captured by a single GNN with random node features? Abboud et al.
[1] prove a universality result for these models, but it is not uniform, since the construction depends on the graph sizes and yields a different model parametrization depending on the choice of the graph sizes. Moreover, the construction of Abboud et al. [1] is of size exponential in the worst case. Grohe [15] recently improved upon this result, proving that the functions that can be computed by a polynomial-size bounded-depth family of GNNs using random node features are exactly the functions computed by bounded-depth Boolean circuits with threshold gates. This establishes an upper bound on the power of GNNs with random node features, by requiring the class of models to be of bounded depth (a fixed number of layers) and of polynomial size. However, this result is still not uniform, since it allows the target function to be captured by different model parametrizations. There is no known upper bound for the expressive power of GNNs with random node features in the uniform setting, and our result establishes this.

Other limitations of MPNNs include over-smoothing [23, 30] and over-squashing [2], which are related to information propagation and are linked to using more message passing layers. The problem of over-smoothing has also been studied from an asymptotic perspective [23, 30], where the idea is to see how the node features evolve as we increase the number of layers in the network. Our study can be seen as orthogonal to this work: we conduct an asymptotic analysis in the size of the graphs rather than in the number of layers.

# 4 Zero-one laws of graph neural networks

# 4.1 Problem statement

We first define graph invariants following Grohe [16].

Definition 4.1. A graph invariant $\xi$ is a function over graphs such that for any pair of graphs $G_{1}$, $G_{2}$ and any isomorphism $f$ from $G_{1}$ to $G_{2}$, it holds that $\xi(G_{1}) = \xi(G_{2})$. Graph invariants for graphs with node features are defined analogously.
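To make the definition concrete, here is a minimal check (the invariant and the graph are our own toy choices, not from the paper) that the edge count is a graph invariant: an isomorphism amounts to relabelling nodes by a permutation matrix $P$, i.e. $\mathbf{A} \mapsto P\mathbf{A}P^\top$, and this leaves the edge count unchanged:

```python
import numpy as np

rng = np.random.default_rng(0)

def xi(A):
    """Toy graph invariant: the number of edges of a simple undirected graph."""
    return int(A.sum()) // 2

n = 7
A = np.triu((rng.random((n, n)) < 0.4).astype(int), k=1)
A = A + A.T                              # symmetric 0/1 adjacency matrix

# An isomorphism is a relabelling of the nodes: A  ->  P A P^T.
P = np.eye(n, dtype=int)[rng.permutation(n)]
A_iso = P @ A @ P.T

assert xi(A) == xi(A_iso)                # invariance under isomorphism
```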
Consider any GNN model $\mathcal{M}$ used for binary graph classification. It is immediate from the definition that $\mathcal{M}$ is invariant under isomorphisms of the graphs on which it acts. Hence $\mathcal{M}$, considered as a function from graphs to $\mathbb{B} = \{0,1\}$, is a graph invariant. In this paper, we study the asymptotic behavior of $\mathcal{M}$ as the number of nodes increases.

One remarkable and influential result from finite model theory is the 'zero-one law' for first-order logic. A (Boolean) graph invariant $\xi$ satisfies a zero-one law if, when we draw graphs $G$ from the ER distribution $\mathbb{G}(n,r)$, as $n$ tends to infinity the probability that $\xi(G) = 1$ either tends to 0 or tends to 1. The result, due to Glebskii et al. [13] and Fagin [8], states that any graph invariant which can be expressed by a first-order formula satisfies a zero-one law. Inspired by this asymptotic analysis of first-order properties, we ask whether GNNs satisfy a zero-one law. As the input of a GNN is a graph with node features, we need to reformulate the statement of the law to incorporate these features.

Definition 4.2. Let $\mathcal{G} = (\mathbf{A},\mathbf{X})$ be a graph with node features, where $\mathbf{A}\sim \mathbb{G}(n,r)$ is a graph adjacency matrix and, independently, $\mathbf{X}$ is a matrix of node embeddings, where $\mathbf{x}_v\sim \mathbb{D}(d)$ for every node $v$. A graph invariant $\xi$ for graphs with node features satisfies a zero-one law with respect to $\mathbb{G}(n,r)$ and $\mathbb{D}(d)$ if, as $n$ tends to infinity, the probability that $\xi (\mathcal{G}) = 1$ tends to either 0 or 1.

Studying the asymptotic behavior of GNNs helps to shed light on their capabilities and limitations.
A zero-one law establishes a limit on the ability of such models to extrapolate to larger graphs: any GNN fitted to a finite set of datapoints will tend towards outputting a constant value on larger and larger graphs drawn from the distribution described above. A zero-one law in this setting also transfers to a corresponding zero-one law for GNNs with random features. This establishes an upper bound on the uniform expressive power of such models.

# 4.2 Graph convolutional networks obey a zero-one law

Our main result in this subsection is that (Boolean) GCN classifiers obey a zero-one law. To achieve our result, we place some mild conditions on the model and initial node embeddings.

First, our study covers sub-Gaussian random vectors, which in particular include all bounded random vectors and all multivariate normal random vectors. We note that in every practical setup all node features have bounded values (determined by the bit length of the storage medium), and are thus sub-Gaussian.

Definition 4.3. A random vector $\mathbf{x} \in \mathbb{R}^d$ is sub-Gaussian if there is $C > 0$ such that for every unit vector $\boldsymbol{y} \in \mathbb{R}^d$ the random variable $\mathbf{x} \cdot \boldsymbol{y}$ satisfies the sub-Gaussian property; that is, for every $t > 0$:

$$
\mathbb{P}(|\mathbf{x} \cdot \boldsymbol{y}| \geq t) \leq 2 \exp\left(-\frac{t^{2}}{C^{2}}\right)
$$

Second, we require that the non-linearity $\sigma$ be Lipschitz continuous. This is again a mild assumption, because all non-linearities used in practice are Lipschitz continuous, including ReLU, clipped ReLU, sigmoid, linearized sigmoid, and tanh.

Definition 4.4. A function $f\colon \mathbb{R}\to \mathbb{R}$ is Lipschitz continuous if there is $C > 0$ such that for any $x,y\in \mathbb{R}$ it holds that $|f(x) - f(y)|\leq C|x - y|$.
Third, we place a condition on the GCN weights with respect to the classifier function $\mathfrak{C}\colon \mathbb{R}^{d(T)}\to \mathbb{B}$ which intuitively excludes a specific weight configuration.

Definition 4.5. Consider a distribution $\mathbb{D}(d)$ with mean $\pmb{\mu}$. Let $\mathcal{M}$ be a GCN used for binary graph classification. Define the sequence $\pmb{\mu}_0, \dots, \pmb{\mu}_T$ of vectors inductively by $\pmb{\mu}_0 \coloneqq \pmb{\mu}$ and $\pmb{\mu}_t \coloneqq \sigma(\pmb{W}_n^{(t)}\pmb{\mu}_{t-1} + \pmb{b}^{(t)})$. The classifier $\mathfrak{C}: \mathbb{R}^{d(T)} \to \mathbb{B}$ is non-splitting for $\mathcal{M}$ if the vector $\pmb{\mu}_T$ does not lie on a decision boundary for $\mathfrak{C}$.

For all reasonable choices of $\mathfrak{C}$, the decision boundary has dimension lower than $d(T)$, and is therefore a set of measure zero. This means that in practice essentially all classifiers are non-splitting.

Given these conditions, we are ready to state our main theorem:

Theorem 4.6. Let $\mathcal{M}$ be a GCN used for binary graph classification and take $r\in [0,1]$. Then, $\mathcal{M}$ satisfies a zero-one law with respect to graph distribution $\mathbb{G}(n,r)$ and feature distribution $\mathbb{D}(d)$, assuming the following conditions hold: (i) the distribution $\mathbb{D}(d)$ is sub-Gaussian, (ii) the non-linearity $\sigma$ is Lipschitz continuous, (iii) the graph-level representation uses average pooling, and (iv) the classifier $\mathfrak{C}$ is non-splitting.

The proof hinges on a probabilistic analysis of the preactivations in each layer. We use a sub-Gaussian concentration inequality to show that the deviation of each of the first-layer preactivations $\mathbf{y}_v^{(1)}$ from its expected value becomes smaller and smaller as the number of nodes $n$ tends to infinity.
Using this and the fact that $\sigma$ is Lipschitz continuous, we then show that each activation $\mathbf{x}_v^{(1)}$ tends towards a fixed value. Iterating this analysis through all the layers of the network yields the following key lemma, which is the heart of the argument.

Lemma 4.7. Let $\mathcal{M}$ and $\mathbb{D}(d)$ satisfy the conditions in Theorem 4.6. Then, for every layer $t$, there is $\pmb{z}_t \in \mathbb{R}^{d(t)}$ such that when sampling a graph with node features from $\mathbb{G}(n,r)$ and $\mathbb{D}(d)$, for every $i \in \{1,\dots,d(t)\}$ and for every $\epsilon > 0$ we have that:

$$
\mathbb{P}\left(\forall v: \left|\left[\mathbf{x}_{v}^{(t)} - \boldsymbol{z}_{t}\right]_{i}\right| < \epsilon\right) \rightarrow 1 \quad \text{as } n \rightarrow \infty
$$

With the lemma established, the proof of Theorem 4.6 follows straightforwardly from the last two assumptions. Since the final node embeddings $\mathbf{x}_v^{(T)}$ tend to a fixed value $\pmb{z}_{T}$, the average-pooled graph-level representations also tend to this value. Since we assume that the classifier is non-splitting, this value cannot lie on a decision boundary, and thus the final output is asymptotically stable at $\mathfrak{C}(\pmb{z}_T)$.

We expect that the rate of convergence will depend in a complex way on the number of layers, the embedding dimensionality, and the choice of non-linearity, which makes a rigorous analysis very challenging. However, considering the manner in which Lemma 4.7 is proved, we can arrive at the following intuitive argument for why the rate of convergence should decrease as the embedding dimensionality increases: if we fix a node $v$ and a layer $t$, then each of the components of its preactivation can be viewed as the weighted sum of $d(t - 1)$ random variables, each of which is the aggregation of activations in the previous layer. Intuitively, as $d(t - 1)$ increases, the variance of this sum also increases.
This increased variance propagates through the network, resulting in a higher variance for the final node representations and thus a slower convergence.

Using analogous assumptions and techniques to those presented in this section, we also establish a zero-one law for MEANGNN$^{+}$, which we report in detail in Appendix B. Abstracting away from technicalities, the overall structure of the proofs for MEANGNN$^{+}$ and GCN is very similar, except that for the former case we additionally need to take care of the global readout component.

# 4.3 Graph neural networks with sum aggregation obey a zero-one law

The other variant of GNNs we consider are those with sum aggregation. The proof in this case works rather differently, and we place different conditions on the model.

Definition 4.8. A function $\sigma \colon \mathbb{R} \to \mathbb{R}$ is eventually constant in both directions if there are $x_{-\infty}, x_{\infty} \in \mathbb{R}$ such that $\sigma(y)$ is constant for $y < x_{-\infty}$ and $\sigma(y)$ is constant for $y > x_{\infty}$. We write $\sigma_{-\infty}$ to denote the minimum and $\sigma_{\infty}$ to denote the maximum value of an eventually constant function $\sigma$.

This means that there is a threshold $(x_{-\infty})$ below which $\sigma$ is constant, and another threshold $(x_{\infty})$ above which $\sigma$ is constant. Both the linearized sigmoid and the clipped ReLU are eventually constant in both directions. Moreover, when working with finite precision, any function with vanishing gradient in both directions (such as the sigmoid) can be regarded as eventually constant in both directions.

We also place the following condition on the weights of the model with respect to the mean of $\mathbb{D}(d)$ and the edge probability $r$.

Definition 4.9. Let $\mathcal{M}$ be any SUMGNN$^+$ for binary graph classification with a non-linearity $\sigma$ which is eventually constant in both directions.
Let $\mathbb{D}(d)$ be any distribution with mean $\pmb{\mu}$, and let $r \in [0,1]$. Then, the model $\mathcal{M}$ is synchronously saturating for $\mathbb{G}(n,r)$ and $\mathbb{D}(d)$ if the following conditions hold:

1. For each $1 \leq i \leq d(1)$:

$$
\left[\left(r \boldsymbol{W}_{n}^{(1)} + \boldsymbol{W}_{r}^{(1)}\right) \boldsymbol{\mu}\right]_{i} \neq 0
$$

2. For every layer $1 < t \leq T$, for each $1 \leq i \leq d(t)$ and for each $\boldsymbol{z} \in \{\sigma_{-\infty}, \sigma_{\infty}\}^{d(t-1)}$:

$$
\left[\left(r \boldsymbol{W}_{n}^{(t)} + \boldsymbol{W}_{r}^{(t)}\right) \boldsymbol{z}\right]_{i} \neq 0
$$

Analysis of our proof of the zero-one law for SUMGNN$^+$ models (Theorem 4.10 below) reveals that the asymptotic behaviour is determined by the matrices $\boldsymbol{Q}_{t}\coloneqq r\boldsymbol{W}_{n}^{(t)} + \boldsymbol{W}_{r}^{(t)}$, where the asymptotic final-layer embeddings are $\sigma(\boldsymbol{Q}_T \sigma(\boldsymbol{Q}_{T-1}\dots \sigma(\boldsymbol{Q}_1\pmb{\mu})\dots))$. To be synchronously saturating is to avoid the boundary case where one of the intermediate steps in this computation has a zero component.

Similarly to the case of a non-splitting classifier, the class of synchronously saturating models is very wide. Indeed, the space of models which are not synchronously saturating is the union of the solution spaces of the corresponding equalities (i.e. the negations of the inequalities in Definition 4.9). Thus, assuming that $\pmb{\mu}$, $\sigma_{-\infty}$ and $\sigma_{\infty}$ are non-zero, the space of non-synchronously-saturating models has lower dimension than the space of all models, and thus has measure zero.

With these definitions in place, we can now lay out the main result:

Theorem 4.10. Let $\mathcal{M}$ be a SUMGNN$^+$ model used for binary graph classification and take $r \in [0,1]$.
Then, $\mathcal{M}$ satisfies a zero-one law with respect to graph distribution $\mathbb{G}(n,r)$ and feature distribution $\mathbb{D}(d)$, assuming the following conditions hold: (i) the distribution $\mathbb{D}(d)$ is sub-Gaussian, (ii) the non-linearity $\sigma$ is eventually constant in both directions, (iii) the graph-level representation uses either average or component-wise maximum pooling, and (iv) $\mathcal{M}$ is synchronously saturating for $\mathbb{G}(n,r)$ and $\mathbb{D}(d)$.

The proof works differently from the GCN and MEANGNN$^+$ cases, but still rests on a probabilistic analysis of the preactivations in each layer. Assuming that $\mathcal{M}$ is synchronously saturating for $\mathbb{G}(n,r)$ and $\mathbb{D}(d)$, we can show that the expected absolute value of each preactivation tends to infinity as the number of nodes increases, and moreover that the probability that it lies below any fixed value tends to 0 exponentially. Hence, the probability that all node embeddings after the first layer are the same and have components which are all $\sigma_{-\infty}$ or $\sigma_{\infty}$ tends to 1. We then extend this analysis to further layers, using the fact that $\mathcal{M}$ is synchronously saturating, which yields inductively that all node embeddings are the same with probability tending to 1, resulting in the following key lemma.

Lemma 4.11. Let $\mathcal{M}$, $\mathbb{D}(d)$ and $r$ be as in Theorem 4.10. Let $\sigma_{-\infty}$ and $\sigma_{\infty}$ be the extremal values taken by the non-linearity. Then, for every layer $t$, there is $\boldsymbol{z}_t \in \{\sigma_{-\infty}, \sigma_{\infty}\}^{d(t)}$ such that when we sample graphs with node features from $\mathbb{G}(n,r)$ and $\mathbb{D}(d)$, the probability that $\mathbf{x}_v^{(t)} = \boldsymbol{z}_t$ for every node $v$ tends to 1 as $n$ tends to infinity.
The final classification output must therefore be the same asymptotically, since its input consists of node embeddings which always take the same value.

# 5 Graph neural networks with random node features

Up to this point we have been considering the graph plus node features as the (random) input to GNNs. In this section, we make a change in perspective and regard the initial node features as part of the model, so that its input consists solely of the graph without features. We focus in this section on SUMGNN$^+$. Adding random initial features to GNNs is known to increase their power [32].

Note that Theorem 4.10 immediately yields a zero-one law for these models. This places restrictions on what can be expressed by SUMGNN$^+$ models with random features subject to the conditions of Theorem 4.10. For example, it is not possible to express that the number of graph nodes is even, since the property of being even does not satisfy a zero-one law with respect to any $r\in [0,1]$.

It is natural to wonder how tight these restrictions are: what precisely is the class of functions which can be approximated by these models? Let us first formalize the notion of approximation.

Definition 5.1. Let $f$ be a Boolean function on graphs, and let $\zeta$ be a random function on graphs. Take $\delta > 0$. Then $\zeta$ uniformly $\delta$-approximates $f$ if:

$$
\forall n \in \mathbb{N}: \mathbb{P}(\zeta(G) = f(G) \mid |G| = n) \geq 1 - \delta
$$

when we sample $G\sim \mathbb{G}(n,1/2)$.

The reason for sampling graphs from $\mathbb{G}(n,1/2)$ is that under this distribution all graphs on $n$ nodes are equally likely. Therefore, the requirement is equivalent to asking that, for every $n\in \mathbb{N}$, the proportion of $n$-node graphs on which $\zeta (G) = f(G)$ is at least $1 - \delta$.

Building on results due to Abboud et al.
[1], we show a partial converse to Theorem 4.10: if a graph invariant satisfies a zero-one law for $\mathbb{G}(n,1/2)$ then it can be uniformly approximated by a $\mathrm{SUMGNN}^+$ with random node features. + +Theorem 5.2. Let $\xi$ be any graph invariant which satisfies a zero-one law with respect to $\mathbb{G}(n,1/2)$. Then, for every $\delta > 0$ there is a $\mathrm{SUMGNN}^+$ with random node features $\mathcal{M}$ which uniformly $\delta$-approximates $\xi$. + +The basis of the proof is a result due to Abboud et al. [1] which states that a $\mathrm{SUMGNN}^+$ with random node features can approximate any graph invariant on graphs of bounded size. When the graph invariant satisfies a zero-one law, we can use the global readout to count the number of nodes. Below a certain threshold, we use the techniques of Abboud et al. [1] to approximate the invariant, and above the threshold we follow its asymptotic behavior. We emphasise that the combination of these techniques yields a model which provides an approximation which is uniform across all graph sizes. + +# 6 Experimental evaluation + +We empirically verify our theoretical findings on a carefully designed synthetic experiment using ER graphs with random features. The goal of these experiments is to answer the following questions for each model under consideration: + +Q1. Do we empirically observe a zero-one law? +Q2. What is the rate of convergence like empirically? +Q3. What is the impact of the number of layers on the convergence? + +# 6.1 Experimental setup + +We report experiments for GCN, MEANGNN, and SUMGNN. The following setup is carefully designed to eliminate confounding factors: + +- We consider 10 GNN models of the same architecture, each with randomly initialized weights, where each weight is sampled independently from $U(-1,1)$.
The non-linearity is eventually constant in both directions: the identity on $[-1,1]$, truncated to $-1$ if the input is smaller than $-1$, and to $1$ if the input is greater than $1$. In the appendix we include additional experiments which test other choices of non-linearity (see Appendix E.2). We apply mean pooling to yield a final representation $\mathbf{z}_G \in \mathbb{R}^d$ of the input graph. +- For every model, we apply a final classifier $\sigma \circ f: \mathbb{R}^d \to \mathbb{B}$, where $f$ is a 2-layer MLP with random weights and tanh activation, which outputs a real value, and $\sigma$ is the sigmoid function. Graphs are classified as 1 if the output of the sigmoid is greater than 0.5, and as 0 otherwise. +- The input graphs are drawn from $\mathbb{G}(n, 1/2)$ with corresponding node features independently drawn from $U(0, 1)$. +- We conduct these experiments with three choices of layers: 10 models with $T = 1$ layer, 10 models with $T = 2$ layers, and 10 models with $T = 3$ layers. + +The goal of these experiments is to understand the behavior of the respective GNN graph classifiers with mean-pooling, as we draw larger and larger ER graphs. Specifically, each model classifies graphs of varying sizes, and we are interested in knowing how the proportion of the graphs which are classified as 1 evolves as we increase the graph sizes. + +We independently sample 10 models to ensure this is not a model-specific behavior, aiming to observe the same phenomenon across the models. If there is a zero-one law, then for each model we should only see two types of curves: either tending to 0 or tending to 1 as graph sizes increase. Whether a curve tends to 0 or to 1 depends on the final classifier: since each of these is an independent MLP with random weights, the specific outcome is essentially random.
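As a concrete illustration, the setup above can be sketched in a few lines of numpy. This is our own minimal sketch, not the authors' code: the embedding dimension is reduced, and all function names (`hard_clip`, `random_model`, `classify`) are ours.

```python
import numpy as np

def hard_clip(x):
    # The eventually-constant non-linearity from the setup:
    # identity on [-1, 1], truncated to -1 below and to 1 above.
    return np.clip(x, -1.0, 1.0)

def random_model(d=16, T=2, seed=0):
    # One "model": T message-passing layers (W, b) plus a 2-layer MLP head,
    # with every weight sampled independently from U(-1, 1).
    rng = np.random.default_rng(seed)
    layers = [(rng.uniform(-1, 1, (d, d)), rng.uniform(-1, 1, d)) for _ in range(T)]
    head = (rng.uniform(-1, 1, (d, d)), rng.uniform(-1, 1, d))
    return layers, head

def classify(model, n, seed=0):
    # Sample a graph from G(n, 1/2) with U(0, 1) node features, run
    # mean-aggregation layers (with self-loops for N+(v)), mean-pool,
    # and threshold the sigmoid output of the random MLP head at 0.5.
    layers, (W1, w2) = model
    d = layers[0][0].shape[1]
    rng = np.random.default_rng(seed)
    A = rng.random((n, n)) < 0.5
    A = np.triu(A, 1)
    A = (A | A.T | np.eye(n, dtype=bool)).astype(float)
    X = rng.random((n, d))
    for W, b in layers:
        X = hard_clip((A @ X) / A.sum(1, keepdims=True) @ W.T + b)
    z = X.mean(axis=0)                      # mean pooling over nodes
    out = 1.0 / (1.0 + np.exp(-(w2 @ np.tanh(W1 @ z))))
    return int(out > 0.5)

model = random_model()
labels = [classify(model, n, seed=n) for n in (50, 100, 200)]
```

Under a zero-one law, repeating `classify` for a fixed model on ever larger `n` should make the output label eventually constant, which is exactly what the curves in the experiments track.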
+ +We consider models with up to 3 layers to ensure that the node features do not become alike because of the orthogonal over-smoothing issue [24], which surfaces with increasing number of layers. A key feature of our theoretical results is that they do not depend on the number of layers, and this is an aspect which we wish to validate empirically. Using models with random weights is a neutral setup, and random GNNs are widely used in the literature as baseline models [36], as they define valid graph convolutions and tend to perform reasonably well. + +# 6.2 Empirical results + +We report all results in Figure 1 for all models considered and discuss them below. Each plot in this figure depicts the curves corresponding to the behavior of independent models with random weights. + +GCN. For this experiment, we use an embedding dimensionality of 128 for each GCN model and draw graphs of sizes up to 5000, where we take 32 samples of each size. The key insight of Theorem 4.6 is that the final mean-pooled embedding vector $\mathbf{z}_G$ tends to a constant vector as we draw larger graphs. Applying an MLP followed by a sigmoid function will therefore map $\mathbf{z}_G$ to either 0 or 1, showing a zero-one law. It is evident from Figure 1 (top row) that all curves tend to either 0 or 1, confirming our expectation regarding the outcome of these experiments for GCNs. Moreover, this holds regardless of the number of layers considered. Since the convergence occurs quickly, already around graphs of size 1000, we did not experiment with larger graph sizes. + +MEANGNN. Given that the key insight behind this result is essentially similar to that of Theorem 4.6, we follow the exact same configuration for these models as for GCNs.
The proof structure is the same in both cases: we show that the preactivations and activations become closer and closer to some fixed values as the number of nodes increases. Moreover, comparing the summations in the definitions of GCN and MEANGNN, on a typical ER graph drawn from $\mathbb{G}(n,1/2)$ we would expect each corresponding summand to have a similar value, since $\sqrt{|\mathcal{N}(u)||\mathcal{N}(v)|}$ should be close to $|\mathcal{N}^{+}(v)|$. Figure 1 (middle row) illustrates the results for MEANGNN and the trends are reassuringly similar to those of GCNs: all models converge quickly to either 0 or 1 with all choices of layers. + +![](images/3f910bd7fbb718bbe9bd30d10271d640cb32fdb915ce4b1acfb5f5e837333323.jpg) +![](images/14b4632b23aeb223c90e988777a2f9dd86d2b71cbca17b9c357a74f31b833fe7.jpg) +![](images/f5200045b93285aa02b2b7af976d46418e6119377ae56489649069067a4ea5e1.jpg) +![](images/cdf3042a79243144d42e96624dae7684aafd0c1f5fa78371cb0aee146576a0a8.jpg) +![](images/bc82b18a607a72b80134372c393b4ea6cb993b9601f8f5e9f7dff158e5adeddd.jpg) +![](images/2b58e85cb9f5edbb77f503571c12490f1743eaff30f7eb9a8d563ae5b2028e68.jpg) +![](images/d13a253380a7a3d5b2f1dc4503c861bff517c32ecf828ebc570f0872eb084d96.jpg) +![](images/879fb4b9c3cbffec32f3d6542752a71a595d4e25455550b8394ee1e6b47ea5a1.jpg) +![](images/3371ee8c4874cf4da4e0bd5ef6b673bd8806bb77d9a4aa9ced4e3229bd14aad0.jpg) +Figure 1: Each plot shows the proportion of graphs of a certain size which are classified as 1 by a set of ten GCNs (top row), MEANGNNs (middle row), and SUMGNNs (bottom row). Each curve (color-coded) shows the behavior of one model as we draw increasingly larger graphs. The phenomenon is observed for 1-layer models (left column), 2-layer models (middle column), and 3-layer models (right column). GCNs and MEANGNNs behave very similarly, with all models converging quickly to 0 or to 1. SUMGNNs show slightly slower convergence, but all models converge in all layers.
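The claim that the two normalizers nearly coincide on dense ER graphs is easy to check numerically. The following sketch (our own sanity check, not code from the paper; all names are ours) compares $\sqrt{|\mathcal{N}(u)||\mathcal{N}(v)|}$ against $|\mathcal{N}^{+}(v)|$ over the edges of one sampled graph:

```python
import numpy as np

# On G(n, 1/2) all degrees concentrate around n/2, so the GCN normalizer
# sqrt(|N(u)||N(v)|) stays close to the mean-aggregation normalizer |N+(v)|.
rng = np.random.default_rng(1)
n = 2000
A = rng.random((n, n)) < 0.5
A = np.triu(A, 1)
A = A | A.T                        # symmetric adjacency, no self-loops
deg = A.sum(axis=1)                # |N(v)| for every node
u, v = np.nonzero(A)               # edge list (both orientations)
ratio = np.sqrt(deg[u] * deg[v]) / (deg[v] + 1)   # vs |N+(v)| = |N(v)| + 1
print(ratio.min(), ratio.mean(), ratio.max())
```

For `n` in the thousands the printed ratios cluster tightly around 1, which is why the corresponding summands in the two architectures take similar values.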
Interestingly, the plots for GCN and MEANGNN models are almost identical. We used the same seed when drawing each of the model weights, and the number of parameters is the same between the two. Hence, the GCN models were parameterized with the same values as the MEANGNN models. The fact that each pair of models performs nearly identically confirms the expectation that the two architectures work in similar ways on ER graphs. + +SUMGNN. Theorem 4.10 shows that, as the number of nodes grows, the embedding vector $\mathbf{z}_v$ of each node $v$ will converge to a constant vector with high probability. Hence, when we do mean-pooling at the end, we expect to get the same vector for different graphs of the same size. The mechanism by which a zero-one law is arrived at is quite different compared with the GCN and MEANGNN case. In particular, in order for the embedding vectors to begin to converge, there must be sufficiently many nodes so that the preactivations surpass the thresholds of the non-linearity. For this experiment, we use a smaller embedding dimensionality of 64 for each SUMGNN model and draw graphs of sizes up to 100000, where we take 32 samples of each size. Figure 1 (bottom row) shows the results for SUMGNN. Note that we observe a slower convergence than with GCN or MEANGNN.
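The saturation mechanism behind this slower convergence is also easy to see numerically. In the sketch below (our own construction, not the paper's code), a single sum-aggregation layer with weights from $U(-1,1)$ produces first-layer preactivations whose typical magnitude grows with $n$, so a non-linearity that is truncated outside $[-1,1]$ is driven to its extremal values once the graph is large enough:

```python
import numpy as np

def mean_abs_preactivation(n, d=8, seed=0):
    # Average |preactivation| of one sum-aggregation (SUMGNN-style) layer
    # on a graph from G(n, 1/2) with U(0, 1) features and U(-1, 1) weights.
    rng = np.random.default_rng(seed)
    A = rng.random((n, n)) < 0.5
    A = np.triu(A, 1)
    A = (A | A.T | np.eye(n, dtype=bool)).astype(float)
    X = rng.random((n, d))
    W = rng.uniform(-1, 1, (d, d))
    b = rng.uniform(-1, 1, d)
    Y = (A @ X) @ W.T + b          # sum over N+(v): no degree normalization
    return np.abs(Y).mean()

# Magnitudes grow with n, so clip(Y, -1, 1) saturates almost every entry.
print(mean_abs_preactivation(20), mean_abs_preactivation(2000))
```

Because saturation only kicks in once the sums exceed the truncation thresholds, small graphs can still produce unsaturated embeddings, matching the slower empirical convergence observed for SUMGNN.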
While we empirically observe that GNNs converge to their asymptotic behavior very quickly, we leave it as future work to rigorously examine the rate at which this convergence occurs. + +In this work we show that GNNs with random features can at most capture properties which follow a zero-one law. We complement this with an almost matching lower bound: Theorem 5.2 currently requires a graph invariant $\xi$ which obeys a zero-one law with respect to a specific value of $r$ (i.e., $1/2$), and if this assumption could be relaxed, it would yield a complete characterization of the expressive power of these models. + +# 8 Acknowledgments + +We would like to thank the anonymous reviewers for their feedback, which led to several improvements in the presentation of the paper. The first author was supported by an EPSRC studentship with project reference 2271793. + +# References + +[1] Ralph Abboud, Ismail Ilkan Ceylan, Martin Grohe, and Thomas Lukasiewicz. The surprising power of graph neural networks with random node initialization. In Proceedings of the Thirtieth International Joint Conference on Artificial Intelligence, IJCAI, pages 2112-2118, 2021. +[2] Uri Alon and Eran Yahav. On the bottleneck of graph neural networks and its practical implications. In Proceedings of the Ninth International Conference on Learning Representations, ICLR, 2021. +[3] Albert-László Barabási and Réka Albert. Emergence of scaling in random networks. Science, 286(5439):509-512, 1999. +[4] Pablo Barceló, Egor V. Kostylev, Mikaël Monet, Jorge Pérez, Juan L. Reutter, and Juan Pablo Silva. The logical expressiveness of graph neural networks. In Proceedings of the Eighth International Conference on Learning Representations, ICLR, 2020. +[5] Peter W. Battaglia, Jessica B.
Hamrick, Victor Bapst, Alvaro Sanchez-Gonzalez, Vinicius Zambaldi, Mateusz Malinowski, Andrea Tacchetti, David Raposo, Adam Santoro, Ryan Faulkner, Caglar Gulcehre, Francis Song, Andrew Ballard, Justin Gilmer, George Dahl, Ashish Vaswani, Kelsey Allen, Charles Nash, Victoria Langston, Chris Dyer, Nicolas Heess, Daan Wierstra, Pushmeet Kohli, Matt Botvinick, Oriol Vinyals, Yujia Li, and Razvan Pascanu. Relational inductive biases, deep learning, and graph networks, 2018. +[6] Béla Bollobás. *Random Graphs*. Cambridge Studies in Advanced Mathematics. Cambridge University Press, second edition, 2001. +[7] David Duvenaud, Dougal Maclaurin, Jorge Aguilera-Iparraguirre, Rafael Gomez-Bombarelli, Timothy Hirzel, Alán Aspuru-Guzik, and Ryan P. Adams. Convolutional networks on graphs for learning molecular fingerprints. In Proceedings of the Twenty-Eighth Annual Conference on Advances in Neural Information Processing Systems, NIPS, pages 2224-2232, 2015. +[8] Ronald Fagin. Probabilities on finite models. The Journal of Symbolic Logic, JSL, 41(1):50-58, 1976. ISSN 00224812. +[9] Peter A. Flach. First-order logic. In Claude Sammut and Geoffrey I. Webb, editors, Encyclopedia of Machine Learning, pages 410-415. Springer US, Boston, MA, 2010. ISBN 978-0-387-30164-8. doi: 10.1007/978-0-387-30164-8_311. + +[10] Alex Fout, Jonathon Byrd, Basir Shariat, and Asa Ben-Hur. Protein interface prediction using graph convolutional networks. In Proceedings of the Thirtieth Annual Conference on Advances in Neural Information Processing Systems, NIPS, pages 6530-6539, 2017. +[11] Fernando Gama, Joan Bruna, and Alejandro Ribeiro. Stability properties of graph neural networks. IEEE Transactions on Signal Processing, 68:5680-5695, 2020. ISSN 1053-587X. +[12] Justin Gilmer, Samuel S. Schoenholz, Patrick F. Riley, Oriol Vinyals, and George E. Dahl. Neural message passing for quantum chemistry. In Proceedings of the Thirty-Fourth International Conference on Machine Learning, ICML, pages 1263-1272, 2017.
+[13] Yu V Glebskii, DI Kogan, MI Liogonkii, and VA Talanov. Volume and fraction of satisfiability of formulas of the lower predicate calculus. *Kibernetika*, 2:17-27, 1969. +[14] Marco Gori, Gabriele Monfardini, and Franco Scarselli. A new model for learning in graph domains. In Proceedings of the 2005 IEEE International Joint Conference on Neural Networks, IJCNN, volume 2, pages 729-734, 2005. +[15] M. Grohe. The descriptive complexity of graph neural networks. In 2023 38th Annual ACM/IEEE Symposium on Logic in Computer Science, LICS, pages 1-14, Los Alamitos, CA, USA, jun 2023. IEEE Computer Society. doi: 10.1109/LICS56636.2023.10175735. +[16] Martin Grohe. The logic of graph neural networks. In Proceedings of the 36th Annual ACM/IEEE Symposium on Logic in Computer Science, LICS, New York, NY, USA, 2021. Association for Computing Machinery. ISBN 9781665448956. +[17] Will Hamilton, Zhitao Ying, and Jure Leskovec. Inductive representation learning on large graphs. In Proceedings of the Thirty-First Annual Conference on Advances in Neural Information Processing Systems, NIPS. Curran Associates, Inc., 2017. +[18] Steven M. Kearnes, Kevin McCloskey, Marc Berndl, Vijay S. Pande, and Patrick Riley. Molecular graph convolutions: moving beyond fingerprints. Journal of Computer Aided Molecular Design, 30(8):595-608, 2016. +[19] Nicolas Keriven and Gabriel Peyre. Universal invariant and equivariant graph neural networks. In Proceedings of the Thirty-Second Annual Conference on Advances in Neural Information Processing Systems, NeurIPS, pages 7090–7099, 2019. +[20] Nicolas Keriven, Alberto Bietti, and Samuel Vaiter. Convergence and stability of graph convolutional networks on large random graphs. In Proceedings of the Thirty-Fourth Annual Conference on Advances in Neural Information Processing Systems, NeurIPS, pages 21512-21523. Curran Associates, Inc., 2020. +[21] Thomas Kipf and Max Welling. Semi-supervised classification with graph convolutional networks. 
In Proceedings of the Fifth International Conference on Learning Representations, ICLR, 2017. +[22] Ron Levie, Wei Huang, Lorenzo Bucci, Michael Bronstein, and Gitta Kutyniok. Transferability of spectral graph convolutional neural networks. Journal of Machine Learning Research, 22(1), 2021. ISSN 1532-4435. +[23] Qimai Li, Zhichao Han, and Xiao-Ming Wu. Deeper insights into graph convolutional networks for semi-supervised learning. In Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence, AAAI, pages 3538–3545, 2018. +[24] Yujia Li, Daniel Tarlow, Marc Brockschmidt, and Richard Zemel. Gated graph sequence neural networks. In Proceedings of the Fourth International Conference on Learning Representations, ICLR, 2016. +[25] Leonid Libkin. Zero-One Laws, pages 235-248. Springer Berlin Heidelberg, Berlin, Heidelberg, 2004. ISBN 978-3-662-07003-1. +[26] Andreas Loukas. What graph neural networks cannot learn: depth vs width. In Proceedings of the Eighth International Conference on Learning Representations, ICLR, 2020. + +[27] Haggai Maron, Heli Ben-Hamu, Hadar Serviansky, and Yaron Lipman. Provably powerful graph networks. In Proceedings of the Thirty-Second Annual Conference on Advances in Neural Information Processing Systems, NeurIPS, pages 2153–2164, 2019. +[28] Haggai Maron, Ethan Fetaya, Nimrod Segol, and Yaron Lipman. On the universality of invariant networks. In Proceedings of the Thirty-Sixth International Conference on Machine Learning, ICML, pages 4363-4371, 2019. +[29] Christopher Morris, Martin Ritzert, Matthias Fey, William L. Hamilton, Jan Eric Lenssen, Gaurav Rattan, and Martin Grohe. Weisfeiler and Leman go neural: Higher-order graph neural networks. In Proceedings of the Thirty-Third AAAI Conference on Artificial Intelligence, AAAI, pages 4602–4609, 2019. +[30] Kenta Oono and Taiji Suzuki. Graph neural networks exponentially lose expressive power for node classification. 
In Proceedings of the Eighth International Conference on Learning Representations, ICLR, 2020. +[31] Luana Ruiz, Luiz Chamon, and Alejandro Ribeiro. Graphon neural networks and the transferability of graph neural networks. In H. Larochelle, M. Ranzato, R. Hadsell, M.F. Balcan, and H. Lin, editors, Proceedings of the Thirty-Fourth Annual Conference on Advances in Neural Information Processing Systems, NeurIPS, pages 1702-1712. Curran Associates, Inc., 2020. +[32] Ryoma Sato, Makoto Yamada, and Hisashi Kashima. Random features strengthen graph neural networks. In Proceedings of the 2021 SIAM International Conference on Data Mining, SDM, pages 333-341, 2021. +[33] Franco Scarselli, Marco Gori, Ah Chung Tsoi, Markus Hagenbuchner, and Gabriele Monfardini. The graph neural network model. IEEE Transactions on Neural Networks, 20(1):61-80, 2009. +[34] Saharon Shelah and Joel Spencer. Zero-one laws for sparse random graphs. Journal of the American Mathematical Society, 1(1):97-115, 1988. ISSN 08940347, 10886834. +[35] Jonathan Shlomi, Peter Battaglia, and Jean-Roch Vlimant. Graph neural networks in particle physics. Machine Learning: Science and Technology, 2(2):021001, 2021. +[36] Rylee Thompson, Boris Knyazev, Elahe Ghalebi, Jungtaek Kim, and Graham W. Taylor. On evaluation metrics for graph generative models. In Proceedings of the Tenth International Conference on Learning Representations, ICLR, 2022. +[37] Petar Velickovic, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Lio, and Yoshua Bengio. Graph attention networks. In Proceedings of the Sixth International Conference on Learning Representations, ICLR, 2018. +[38] Roman Vershynin. High-dimensional probability. Number 47 in Cambridge series on statistical and probabilistic mathematics. Cambridge University Press, Cambridge, 2018. ISBN 9781108415194. +[39] Keyulu Xu, Weihua Hu, Jure Leskovec, and Stefanie Jegelka. How powerful are graph neural networks? 
In Proceedings of the Seventh International Conference on Learning Representations, ICLR, 2019. +[40] Keyulu Xu, Mozhi Zhang, Jingling Li, Simon Shaolei Du, Ken-Ichi Kawarabayashi, and Stefanie Jegelka. How neural networks extrapolate: From feedforward to graph neural networks. In Proceedings of the Ninth International Conference on Learning Representations, ICLR, 2021. +[41] Gilad Yehudai, Ethan Fetaya, Eli A. Meirom, Gal Chechik, and Haggai Maron. From local structures to size generalization in graph neural networks. In International Conference on Machine Learning, ICML, 2020. +[42] Marinka Zitnik, Monica Agrawal, and Jure Leskovec. Modeling polypharmacy side effects with graph convolutional networks. Bioinformatics, 34(13):i457-i466, 2018. + +# A Proof of the zero-one law for GCN + +The proof of Lemma 4.7 and subsequently Theorem 4.6 relies on an asymptotic analysis of the distributions of the node embeddings at each layer. The following famous concentration inequality for sub-Gaussian random variables allows us to put bounds on the deviation of a sum of random variables from its expected value. + +Theorem A.1 (Hoeffding Inequality for sub-Gaussian random variables). There is a universal constant $c$ such that the following holds. Let $z_1, \ldots, z_N$ be independent sub-Gaussian scalar random variables with mean 0. Assume that the constants $C$ from Definition 4.3 for each $z_i$ can be bounded by $K$. Then for all $t > 0$: + +$$ +\mathbb{P}\left(\left|\sum_{i=1}^{N} z_i\right| \geq t\right) \leq 2\exp\left(-\frac{ct^2}{K^2 N}\right) +$$ + +Proof. See Theorem 2.6.2 in [38]. $\square$ + +We also make use of the following three basic facts about sub-Gaussian random variables. + +Lemma A.2. If $\mathbf{z}$ is a sub-Gaussian random vector and $\mathbf{q}$ is any vector of the same dimension then $\mathbf{q} \cdot \mathbf{z}$ is sub-Gaussian. + +Proof.
This follows directly from Definition 4.3. $\square$ + +Lemma A.3. If $z$ is a sub-Gaussian scalar random variable then so is $z - \mathbb{E}[z]$. + +Proof. See Lemma 2.6.8 in [38]. $\square$ + +Lemma A.4. If $z$ is a sub-Gaussian scalar random variable and $a$ is an independent Bernoulli random variable then $za$ is sub-Gaussian. + +Proof. Let $C$ be the constant given by Definition 4.3 for $z$. Let $a$ take values $\alpha$ and $\beta$. Using the Law of Total Probability: + +$$ +\begin{array}{l} \mathbb{P}(|za| \geq t) = \mathbb{P}(|za| \geq t \mid a = \alpha)\,\mathbb{P}(a = \alpha) + \mathbb{P}(|za| \geq t \mid a = \beta)\,\mathbb{P}(a = \beta) \\ = \mathbb{P}(|z| \geq t/|\alpha|)\,\mathbb{P}(a = \alpha) + \mathbb{P}(|z| \geq t/|\beta|)\,\mathbb{P}(a = \beta) \\ \leq 2\exp\left(-\frac{t^2}{|\alpha|^2 C^2}\right)\mathbb{P}(a = \alpha) + 2\exp\left(-\frac{t^2}{|\beta|^2 C^2}\right)\mathbb{P}(a = \beta) \\ \leq 2\exp\left(-\frac{t^2}{\max\{|\alpha|, |\beta|\}^2 C^2}\right) \end{array} +$$ + +Therefore $za$ is sub-Gaussian. $\square$ + +We first prove the key lemma regarding the node embeddings. + +Proof of Lemma 4.7. Let $C$ be the Lipschitz constant for $\sigma$. Start by considering the first layer preactivations $\mathbf{y}_v^{(1)}$ and drop the superscript (1) for notational clarity.
We have that: + +$$ +\mathbf{y}_v = \sum_{u \in \mathcal{N}^+(v)} \frac{1}{\sqrt{|\mathcal{N}(v)||\mathcal{N}(u)|}} \boldsymbol{W}_n \mathbf{x}_u^{(0)} + \boldsymbol{b} +$$ + +Fix $i \in \{1, \dots, d(1)\}$. The deviation from the expected value in the $i$th component is as follows: + +$$ +|[\mathbf{y}_v - \mathbb{E}[\mathbf{y}_v]]_i| = \left|\sum_{u \in \mathcal{N}^+(v)} \frac{1}{\sqrt{|\mathcal{N}(v)||\mathcal{N}(u)|}} \left[\boldsymbol{W}_n \mathbf{x}_u^{(0)} - \mathbb{E}\left[\boldsymbol{W}_n \mathbf{x}_u^{(0)}\right]\right]_i\right| +$$ + +Now every $|\mathcal{N}(u)|$ is a sum of $n$ independent 0-1 Bernoulli random variables with success probability $r$ (since the graph is sampled from an Erdős-Rényi distribution). Since Bernoulli random variables are bounded and hence sub-Gaussian, we can use Hoeffding's Inequality to bound the deviation of $|\mathcal{N}^{+}(u)| = |\mathcal{N}(u)| + 1$ from its expected value $nr + 1$. By Theorem A.1 there is a constant $K_0$ such that for every $\gamma \in (0,1)$ and node $u$: + +$$ +\begin{array}{l} \mathbb{P}\left(|\mathcal{N}^+(u)| \leq \gamma nr\right) \leq \mathbb{P}\left(\left||\mathcal{N}^+(u)| - nr\right| \geq (1-\gamma)nr\right) \\ \leq \mathbb{P}\left(\left||\mathcal{N}(u)| - nr\right| \geq (1-\gamma)nr - 1\right) \\ \leq 2\exp\left(-\frac{K_0((1-\gamma)nr - 1)^2}{n}\right) \end{array} +$$ + +This means that, taking a union bound: + +$$ +\mathbb{P}(\forall u \in V\colon |\mathcal{N}^+(u)| \geq \gamma nr) \geq 1 - 2n\exp\left(-\frac{K_0((1-\gamma)nr - 1)^2}{n}\right) +$$ + +Fix $i \in \{1, \ldots, d(1)\}$. In the case where $\forall u \in V\colon |\mathcal{N}^{+}(u)| \geq \gamma nr$ we have that: + +$$ +|[\mathbf{y}_v - \mathbb{E}[\mathbf{y}_v]]_i| \leq \frac{1}{\sqrt{|\mathcal{N}(v)|\gamma nr}} \left|\sum_{u \in \mathcal{N}^+(v)} \left[\boldsymbol{W}_n \mathbf{x}_u^{(0)} - \mathbb{E}\left[\boldsymbol{W}_n \mathbf{x}_u^{(0)}\right]\right]_i\right| +$$ + +Now, by Lemma A.2 and Lemma A.3 each $\boldsymbol{W}_n\mathbf{x}_u^{(0)} - \mathbb{E}\left[\boldsymbol{W}_n\mathbf{x}_u^{(0)}\right]$ is sub-Gaussian. We can thus apply Hoeffding's Inequality (Theorem A.1) to obtain a constant $K$ such that for every $t > 0$ we have: + +$$ +\begin{array}{l} \mathbb{P}\left(\left|[\mathbf{y}_v - \mathbb{E}[\mathbf{y}_v]]_i\right| \geq t\right) \leq \mathbb{P}\left(\left|\sum_{u \in \mathcal{N}^+(v)} \left[\boldsymbol{W}_n \mathbf{x}_u^{(0)} - \mathbb{E}\left[\boldsymbol{W}_n \mathbf{x}_u^{(0)}\right]\right]_i\right| \geq t\sqrt{|\mathcal{N}(v)|\gamma nr}\right) \\ \leq 2\exp\left(-\frac{Kt^2|\mathcal{N}(v)|\gamma nr}{|\mathcal{N}^+(v)|}\right) \\ \leq 2\exp\left(-\frac{Kt^2\gamma nr}{2}\right) \end{array} +$$ + +Now using the Law of Total Probability, partitioning depending on whether $\forall u \in V\colon |\mathcal{N}^{+}(u)| \geq \gamma nr$, we get a bound as follows: + +$$ +\mathbb{P}(|[\mathbf{y}_v - \mathbb{E}[\mathbf{y}_v]]_i| \geq t) \leq 2\exp\left(-\frac{Kt^2\gamma nr}{2}\right) + 2n\exp\left(-\frac{K_0((1-\gamma)nr - 1)^2}{n}\right) +$$ + +From now on fix any $\gamma \in (0,1)$. + +Let $\mathbf{z}_1 \coloneqq \sigma(\mathbb{E}[\mathbf{y}_v])$ for any $v$ (this is the same for every $v$). Applying the bound with $t = \epsilon/C$ we can bound the deviation of $\mathbf{x}_v$ from $\mathbf{z}_1$ as follows, using the Lipschitz constant $C$: + +$$ +\begin{array}{l} \mathbb{P}\left(\left|[\mathbf{x}_v - \boldsymbol{z}_1]_i\right| \geq \epsilon\right) = \mathbb{P}\left(\left|[\sigma(\mathbf{y}_v) - \sigma(\mathbb{E}[\mathbf{y}_v])]_i\right| \geq \epsilon\right) \\ \leq \mathbb{P}\left(\left|[\mathbf{y}_v - \mathbb{E}[\mathbf{y}_v]]_i\right| \geq \epsilon/C\right) \\ \leq 2\exp\left(-\frac{K\epsilon^2\gamma nr}{2C^2}\right) + 2n\exp\left(-\frac{K_0((1-\gamma)nr - 1)^2}{n}\right) \end{array} +$$ + +Taking a union bound, the probability that $|[\mathbf{x}_v - \boldsymbol{z}_1]_i| < \epsilon$ for every node $v$ and every $i \in \{1, \ldots, d(1)\}$ is at least: + +$$ +1 - n\,d(1)\,\mathbb{P}\left(\left|[\mathbf{x}_v - \boldsymbol{z}_1]_i\right| \geq \epsilon\right) +$$ + +This tends to 1 as $n$ tends to infinity, which yields the result for the first layer. + +Now consider the preactivations for the second layer: + +$$ +\mathbf{y}_v^{(2)} = \sum_{u \in \mathcal{N}^+(v)} \frac{1}{\sqrt{|\mathcal{N}(v)||\mathcal{N}(u)|}} \boldsymbol{W}_n^{(2)} \mathbf{x}_u^{(1)} + \boldsymbol{b}^{(2)} +$$ + +As in the single layer case, we can bound the probability that any $|\mathcal{N}(u)|$ is less than some $\gamma nr$. Condition on the event that $\forall u \in V\colon |\mathcal{N}^{+}(u)| \geq \gamma nr$.
+ +By applying the result for the first layer to $\epsilon' = \epsilon \sqrt{\gamma r} / (2C \| \boldsymbol{W}_n^{(2)} \|_{\infty})$, we have that for each $i \in \{1, \dots, d(1)\}$: + +$$ +\mathbb{P}\left(\forall v\colon \left|\left[\mathbf{x}_v^{(1)} - \mathbf{z}_1\right]_i\right| < \epsilon'\right) \to 1 \quad \text{as } n \to \infty +$$ + +Condition additionally on the event that $|[\mathbf{x}_v^{(1)} - \boldsymbol{z}_1]_i| < \epsilon'$ for every node $v$ and every $i \in \{1, \ldots, d(1)\}$. + +Now define: + +$$ +\boldsymbol{a}_2 \coloneqq \sum_{u \in \mathcal{N}^+(v)} \frac{1}{\sqrt{|\mathcal{N}(v)||\mathcal{N}(u)|}} \boldsymbol{W}_n^{(2)} \boldsymbol{z}_1 + \boldsymbol{b}^{(2)} +$$ + +Then we have that for every $i \in \{1, \ldots, d(2)\}$: + +$$ +\begin{array}{l} |[\mathbf{y}_v^{(2)} - \boldsymbol{a}_2]_i| \leq \left|\sum_{u \in \mathcal{N}^+(v)} \frac{1}{\sqrt{|\mathcal{N}(u)||\mathcal{N}(v)|}} \left[\boldsymbol{W}_n^{(2)}\left(\mathbf{x}_u^{(1)} - \boldsymbol{z}_1\right)\right]_i\right| \\ \leq \frac{1}{\sqrt{|\mathcal{N}(v)|\gamma nr}} \left|\sum_{u \in \mathcal{N}^+(v)} \left[\boldsymbol{W}_n^{(2)}(\mathbf{x}_u^{(1)} - \boldsymbol{z}_1)\right]_i\right| \\ \leq \frac{1}{\sqrt{|\mathcal{N}(v)|\gamma nr}} \left\|\boldsymbol{W}_n^{(2)}\right\|_{\infty} \sum_{u \in \mathcal{N}^+(v)} \left\|\mathbf{x}_u^{(1)} - \boldsymbol{z}_1\right\|_{\infty} \\ \leq \frac{\epsilon|\mathcal{N}^+(v)|}{2C\sqrt{|\mathcal{N}(v)|n}} \\ \leq \frac{\epsilon(n+1)}{2Cn} \\ \leq \frac{\epsilon}{C} \end{array} +$$ + +Now let $\boldsymbol{z}_2 \coloneqq \sigma(\boldsymbol{a}_2)$.
As in the single-layer case we can use the bound on $|[\mathbf{y}_v^{(2)} - \boldsymbol{a}_2]_i|$ and the fact that $\sigma$ is Lipschitz to find that, for every node $v$ and $i \in \{1, \dots, d(2)\}$: + +$$ +\left|\left[\mathbf{x}_v^{(2)} - \boldsymbol{z}_2\right]_i\right| < \epsilon +$$ + +Since the probability of each of the two events on which we conditioned tends to 1, the result follows for the second layer. + +Finally, we apply the argument inductively through the layers to obtain the result for every layer. $\square$ + +With the key lemma established we can prove the main result. + +Proof of Theorem 4.6. By Lemma 4.7 the final node embeddings $\mathbf{x}_v^{(T)}$ deviate less and less from $\boldsymbol{z}_T$ as the number of nodes $n$ increases. Therefore, the average-pooled graph-level representation also deviates less and less from $\boldsymbol{z}_T$. By inspecting the proof, we can see that this $\boldsymbol{z}_T$ is exactly the vector $\pmb{\mu}_T$ in the definition of non-splitting (Definition 4.5). This means that $\boldsymbol{z}_T$ cannot lie on a decision boundary for the classifier $\mathfrak{C}$. Hence, there is $\epsilon > 0$ such that $\mathfrak{C}$ is constant on: + +$$ +\{\boldsymbol{x} \in \mathbb{R}^{d(T)} \mid \forall i \in \{1, \dots, d(T)\}\colon |[\boldsymbol{z}_T - \boldsymbol{x}]_i| < \epsilon\} +$$ + +Therefore, the probability that the output of $\mathcal{M}$ is $\mathfrak{C}(\boldsymbol{z}_T)$ tends to 1 as $n$ tends to infinity. $\square$ + +# B Proof of the zero-one law for MEANGNN$^+$ + +Let us turn now to establishing a zero-one law for GNNs using mean aggregation. We place the same conditions as with Theorem 4.6. This time the notion of 'non-splitting' is as follows. + +Definition B.1. Consider a distribution $\mathbb{D}(d)$ with mean $\pmb{\mu}$.
Let $\mathcal{M}$ be a MEANGNN$^+$ used for binary graph classification. Define the sequence $\pmb{\mu}_0, \dots, \pmb{\mu}_T$ of vectors inductively by $\pmb{\mu}_0 \coloneqq \pmb{\mu}$ and $\pmb{\mu}_t \coloneqq \sigma((\boldsymbol{W}_n^{(t)} + \boldsymbol{W}_r^{(t)})\pmb{\mu}_{t-1} + \boldsymbol{b}^{(t)})$. The classifier $\mathfrak{C}: \mathbb{R}^{d(T)} \to \mathbb{B}$ is non-splitting for $\mathcal{M}$ if the vector $\pmb{\mu}_T$ does not lie on a decision boundary for $\mathfrak{C}$. + +Again, in practice essentially all classifiers are non-splitting. + +Theorem B.2. Let $\mathcal{M}$ be a MEANGNN$^+$ used for binary graph classification and take $r \in [0,1]$. Then, $\mathcal{M}$ satisfies a zero-one law with respect to graph distribution $\mathbb{G}(n,r)$ and feature distribution $\mathbb{D}(d)$ assuming the following conditions hold: (i) the distribution $\mathbb{D}(d)$ is sub-Gaussian, (ii) the non-linearity $\sigma$ is Lipschitz continuous, (iii) the graph-level representation uses average pooling, and (iv) the classifier $\mathfrak{C}$ is non-splitting. + +Note that the result immediately applies to the MEANGNN architecture, since it is a special case of MEANGNN$^+$. + +The overall structure of the proof is the same as for GCN. In particular, we prove the following key lemma stating that all node embeddings tend to fixed values. + +Lemma B.3. Let $\mathcal{M}$ and $\mathbb{D}(d)$ satisfy the conditions in Theorem B.2. Then, for every layer $t$, there is $\boldsymbol{z}_t \in \mathbb{R}^{d(t)}$ such that when sampling a graph with node features from $\mathbb{G}(n,r)$ and $\mathbb{D}(d)$, for every $i \in \{1,\dots,d(t)\}$ and for every $\epsilon > 0$ we have that: + +$$ +\mathbb{P}\left(\forall v\colon \left|\left[\mathbf{x}_v^{(t)} - \boldsymbol{z}_t\right]_i\right| < \epsilon\right) \to 1 \quad \text{as } n \to \infty +$$ + +We are ready to present the proofs of the statements. The proof of the key lemma works in a similar way to the GCN case. + +Proof of Lemma B.3.
Let $C$ be the Lipschitz constant for $\sigma$. Start by considering the first layer preactivations $\mathbf{y}_v^{(1)}$ and drop superscript (1)'s for notational clarity. We have that:

$$
\mathbf{y}_v = \frac{1}{|\mathcal{N}^+(v)|} \boldsymbol{W}_n \sum_{u \in \mathcal{N}^+(v)} \mathbf{x}_u^{(0)} + \frac{1}{n} \boldsymbol{W}_r \sum_{u \in V} \mathbf{x}_u^{(0)} + \boldsymbol{b}
$$

Fix $i \in \{1, \dots, d(1)\}$. We can bound the deviation from the expected value as follows:

$$
\left| \left[ \mathbf{y}_v - \mathbb{E}[\mathbf{y}_v] \right]_i \right| \leq \frac{1}{|\mathcal{N}^+(v)|} \left| \sum_{u \in \mathcal{N}^+(v)} \left[ \boldsymbol{W}_n \mathbf{x}_u^{(0)} - \mathbb{E}\left[ \boldsymbol{W}_n \mathbf{x}_u^{(0)} \right] \right]_i \right| + \frac{1}{n} \left| \sum_{u \in V} \left[ \boldsymbol{W}_r \mathbf{x}_u^{(0)} - \mathbb{E}\left[ \boldsymbol{W}_r \mathbf{x}_u^{(0)} \right] \right]_i \right|
$$

By Lemma A.2 and Lemma A.3, each $\left[\boldsymbol{W}_n\mathbf{x}_u^{(0)} - \mathbb{E}\left[\boldsymbol{W}_n\mathbf{x}_u^{(0)}\right]\right]_i$ and each $\left[\boldsymbol{W}_r\mathbf{x}_u^{(0)} - \mathbb{E}\left[\boldsymbol{W}_r\mathbf{x}_u^{(0)}\right]\right]_i$ are sub-Gaussian. We can therefore apply Hoeffding's Inequality to their sums.
First, by Theorem A.1 there is a constant $K_{\mathrm{g}}$ such that for any $t > 0$:

$$
\begin{array}{l} \mathbb{P}\left( \frac{1}{n} \left| \sum_{u \in V} \left[ \boldsymbol{W}_r \mathbf{x}_u^{(0)} - \mathbb{E}\left[ \boldsymbol{W}_r \mathbf{x}_u^{(0)} \right] \right]_i \right| \geq t \right) = \mathbb{P}\left( \left| \sum_{u \in V} \left[ \boldsymbol{W}_r \mathbf{x}_u^{(0)} - \mathbb{E}\left[ \boldsymbol{W}_r \mathbf{x}_u^{(0)} \right] \right]_i \right| \geq t n \right) \\ \leq 2 \exp\left( - \frac{K_{\mathrm{g}} t^2 n^2}{n} \right) \\ = 2 \exp\left( - K_{\mathrm{g}} t^2 n \right) \end{array}
$$

Second, applying Theorem A.1 again there is a constant $K_{\mathrm{n}}$ such that for any $t > 0$:

$$
\begin{array}{l} \mathbb{P}\left( \frac{1}{|\mathcal{N}^+(v)|} \left| \sum_{u \in \mathcal{N}^+(v)} \left[ \boldsymbol{W}_n \mathbf{x}_u^{(0)} - \mathbb{E}\left[ \boldsymbol{W}_n \mathbf{x}_u^{(0)} \right] \right]_i \right| \geq t \right) \\ = \mathbb{P}\left( \left| \sum_{u \in \mathcal{N}^+(v)} \left[ \boldsymbol{W}_n \mathbf{x}_u^{(0)} - \mathbb{E}\left[ \boldsymbol{W}_n \mathbf{x}_u^{(0)} \right] \right]_i \right| \geq t |\mathcal{N}^+(v)| \right) \\ \leq 2 \exp\left( - \frac{K_{\mathrm{n}} t^2 |\mathcal{N}^+(v)|^2}{n} \right) \end{array}
$$

Now $|\mathcal{N}^+(v)|$ is a sum of $n$ independent 0-1 Bernoulli random variables with success probability $r$.
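As a quick numerical aside (ours, not part of the proof), one can check how sharply such a binomial degree concentrates: the fraction of sampled degrees falling at or below $\gamma n r$ decays rapidly as $n$ grows. The constants below ($r = 0.3$, $\gamma = 0.5$, 20,000 samples) are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

r, gamma = 0.3, 0.5  # edge probability and slack factor (illustrative choices)
fracs = []
for n in [100, 1000, 10000]:
    degrees = rng.binomial(n, r, size=20_000)        # degree ~ Bin(n, r)
    fracs.append(float((degrees <= gamma * n * r).mean()))

print(fracs)  # the lower-tail mass shrinks quickly as n grows
```

This matches the exponential tail bound above: the bad event $|\mathcal{N}^+(v)| \leq \gamma n r$ becomes vanishingly rare.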
Hence, as in the proof of Lemma 4.7, by Hoeffding's Inequality (Theorem A.1) there is a constant $K_{0}$ such that for every $\gamma \in (0,1)$:

$$
\mathbb{P}\left( \left| \mathcal{N}^+(v) \right| \leq \gamma n r \right) \leq 2 \exp\left( - \frac{K_0 \left( (1 - \gamma) n r - 1 \right)^2}{n} \right)
$$

We can then use the Law of Total Probability, partitioning on whether $|\mathcal{N}^{+}(v)| \geq \gamma nr$, to get a bound as follows:

$$
\begin{array}{l} \mathbb{P}\left( \frac{1}{|\mathcal{N}^+(v)|} \left| \sum_{u \in \mathcal{N}^+(v)} \left( \boldsymbol{W}_n \mathbf{x}_u^{(0)} - \mathbb{E}\left[ \boldsymbol{W}_n \mathbf{x}_u^{(0)} \right] \right) \right| \geq t \right) \\ \leq 2 \exp\left( - \frac{K_{\mathrm{n}} t^2 (\gamma n r)^2}{n} \right) + 2 \exp\left( - \frac{K_0 ((1 - \gamma) n r - 1)^2}{n} \right) \end{array}
$$

From now on fix any $\gamma \in (0,1)$.

Finally let $z_{1} \coloneqq \sigma(\mathbb{E}[\mathbf{y}_{v}])$ for any $v$ (this is the same for every $v$). Applying the two bounds with $t = \epsilon / (2C)$ we can bound the deviation of $\mathbf{x}_{v}$ from $z_{1}$ as follows, using the Lipschitz constant $C$.
$$
\begin{array}{l} \mathbb{P}\left( \left| \left[ \mathbf{x}_v - \boldsymbol{z}_1 \right]_i \right| \geq \epsilon \right) = \mathbb{P}\left( \left| \left[ \sigma(\mathbf{y}_v) - \sigma\left( \mathbb{E}[\mathbf{y}_v] \right) \right]_i \right| \geq \epsilon \right) \\ \leq \mathbb{P}\left( \left| \left[ \mathbf{y}_v - \mathbb{E}[\mathbf{y}_v] \right]_i \right| \geq \frac{\epsilon}{C} \right) \\ \leq \left( \begin{array}{l} \mathbb{P}\left( \frac{1}{|\mathcal{N}^+(v)|} \left| \sum_{u \in \mathcal{N}^+(v)} \left[ \boldsymbol{W}_n \mathbf{x}_u^{(0)} - \mathbb{E}\left[ \boldsymbol{W}_n \mathbf{x}_u^{(0)} \right] \right]_i \right| \geq \frac{\epsilon}{2C} \right) \\ + \mathbb{P}\left( \frac{1}{n} \left| \sum_{u \in V} \left[ \boldsymbol{W}_r \mathbf{x}_u^{(0)} - \mathbb{E}\left[ \boldsymbol{W}_r \mathbf{x}_u^{(0)} \right] \right]_i \right| \geq \frac{\epsilon}{2C} \right) \end{array} \right) \\ \leq \left( \begin{array}{l} 2 \exp\left( - \frac{K_{\mathrm{n}} (\epsilon \gamma r)^2 n}{4 C^2} \right) \\ + 2 \exp\left( - \frac{K_0 ((1 - \gamma) n r - 1)^2}{n} \right) \\ + 2 \exp\left( - \frac{K_{\mathrm{g}} \epsilon^2 n}{4 C^2} \right) \end{array} \right) \end{array}
$$

Taking a union bound, the probability that $|[{\bf x}_v - {\bf z}_1]_i| < \epsilon$ for every node $v$ and every $i \in \{1,\ldots,d(1)\}$ is at least:

$$
1 - n \, d(1) \, \mathbb{P}\left( \left| \left[ \mathbf{x}_v - \boldsymbol{z}_1 \right]_i \right| \geq \epsilon \right)
$$

This tends to 1 as $n$ tends to infinity.

Let us turn now to the second layer.
By applying the above result for the first layer to $\epsilon' = \epsilon / (2C \max\{\|\pmb{W}_n^{(2)}\|_{\infty}, \|\pmb{W}_r^{(2)}\|_{\infty}\})$, we have that for each $i \in \{1, \ldots, d(2)\}$:

$$
\mathbb{P}\left( \forall v: \left| \left[ \mathbf{x}_v^{(1)} - \mathbf{z}_1 \right]_i \right| < \epsilon' \right) \rightarrow 1 \quad \text{as } n \rightarrow \infty
$$

Condition on the event that $|[{\bf x}_v^{(1)} - {\bf z}_1]_i| < \epsilon'$ for every node $v$ and every $i \in \{1,\ldots,d(1)\}$.

Fix $v$ and consider the second-layer preactivations:

$$
\mathbf{y}_v^{(2)} = \frac{1}{|\mathcal{N}^+(v)|} \boldsymbol{W}_n^{(2)} \sum_{u \in \mathcal{N}^+(v)} \mathbf{x}_u^{(1)} + \frac{1}{n} \boldsymbol{W}_r^{(2)} \sum_{u \in V} \mathbf{x}_u^{(1)} + \boldsymbol{b}^{(2)}
$$

Define:

$$
\boldsymbol{a}_2 := \frac{1}{|\mathcal{N}^+(v)|} \boldsymbol{W}_n^{(2)} \sum_{u \in \mathcal{N}^+(v)} \boldsymbol{z}_1 + \frac{1}{n} \boldsymbol{W}_r^{(2)} \sum_{u \in V} \boldsymbol{z}_1 + \boldsymbol{b}^{(2)}
$$

Fix $i \in \{1,\ldots,d(2)\}$.
Then:

$$
\begin{array}{l} \left| \left[ \mathbf{y}_v^{(2)} - \boldsymbol{a}_2 \right]_i \right| = \left| \left[ \frac{1}{|\mathcal{N}^+(v)|} \boldsymbol{W}_n^{(2)} \sum_{u \in \mathcal{N}^+(v)} \left( \mathbf{x}_u^{(1)} - \boldsymbol{z}_1 \right) + \frac{1}{n} \boldsymbol{W}_r^{(2)} \sum_{u \in V} \left( \mathbf{x}_u^{(1)} - \boldsymbol{z}_1 \right) \right]_i \right| \\ \leq \frac{1}{|\mathcal{N}^+(v)|} \left\| \boldsymbol{W}_n^{(2)} \right\|_{\infty} \sum_{u \in \mathcal{N}^+(v)} \left\| \mathbf{x}_u^{(1)} - \boldsymbol{z}_1 \right\|_{\infty} + \frac{1}{n} \left\| \boldsymbol{W}_r^{(2)} \right\|_{\infty} \sum_{u \in V} \left\| \mathbf{x}_u^{(1)} - \boldsymbol{z}_1 \right\|_{\infty} \\ \leq \frac{\epsilon}{C} \end{array}
$$

Let $z_{2} \coloneqq \sigma(a_{2})$. Then we can use the Lipschitz continuity of $\sigma$ to bound the deviation of the activation from $z_{2}$ as follows.

$$
\left| \left[ \mathbf{x}_v^{(2)} - \mathbf{z}_2 \right]_i \right| = \left| \left[ \sigma\left( \mathbf{y}_v^{(2)} \right) - \sigma(\mathbf{a}_2) \right]_i \right| \leq C \left| \left[ \mathbf{y}_v^{(2)} - \mathbf{a}_2 \right]_i \right| \leq \epsilon
$$

Since the probability that $|[\mathbf{x}_v^{(1)} - \mathbf{z}_1]_i| < \epsilon'$ for every node $v$ and every $i \in \{1, \dots, d(1)\}$ tends to 1, we get that the probability that $|[\mathbf{x}_v^{(2)} - \mathbf{z}_2]_i| < \epsilon$ for every node $v$ and every $i \in \{1, \dots, d(2)\}$ also tends to 1.

Finally we apply the above argument inductively through all layers to get the desired result.

The proof of the main result now proceeds as in the proof of Theorem 4.6.

Proof of Theorem B.2. By Lemma B.3 the final node embeddings $\mathbf{x}_v^{(T)}$ deviate less and less from $z_{T}$ as the number of nodes $n$ increases.
Therefore, the average-pooled graph-level representation also deviates less and less from $z_{T}$. By inspecting the proof, we can see that this $z_{T}$ is exactly the vector $\pmb{\mu}_{T}$ in the definition of non-splitting (Definition B.1). This means that $\pmb{z}_{T}$ cannot lie on a decision boundary for the classifier $\mathfrak{C}$. Hence, there is $\epsilon > 0$ such that $\mathfrak{C}$ is constant on:

$$
\left\{ \boldsymbol{x} \in \mathbb{R}^{d(T)} \mid \forall i \in \{1, \dots, d(T)\}: \left| [\boldsymbol{z}_T - \boldsymbol{x}]_i \right| < \epsilon \right\}
$$

Therefore, the probability that the output of $\mathcal{M}$ is $\mathfrak{C}(z_T)$ tends to 1 as $n$ tends to infinity.

# C Proof of the zero-one law for $\mathrm{SUMGNN^{+}}$

The proof of the key lemma works rather differently from the GCN and MEANGNN$^{+}$ cases, but we still make important use of Hoeffding's Inequality.

Proof of Lemma 4.11. Consider the first layer preactivations $\mathbf{y}_v^{(1)}$ and drop superscript (1)'s for notational clarity.
We can rearrange the expression as follows: + +$$ +\mathbf {y} _ {v} = \left(\boldsymbol {W} _ {s} + \boldsymbol {W} _ {g}\right) \mathbf {x} _ {v} ^ {(0)} + \left(\boldsymbol {W} _ {n} + \boldsymbol {W} _ {g}\right) \sum_ {u \in \mathcal {N} (v)} \mathbf {x} _ {u} ^ {(0)} + \boldsymbol {W} _ {g} \sum_ {u \in V \backslash \mathcal {N} ^ {+} (v)} \mathbf {x} _ {u} ^ {(0)} + \boldsymbol {b} +$$ + +For $u, v \leq n$ define: + +$$ +\mathbf {w} _ {u, v} = (\mathbf {A} _ {u v} \boldsymbol {W} _ {n} + \boldsymbol {W} _ {g}) \mathbf {x} _ {u} ^ {(0)} 1 _ {u \neq v} + (\boldsymbol {W} _ {s} + \boldsymbol {W} _ {g}) \mathbf {x} _ {u} ^ {(0)} 1 _ {u = v} +$$ + +Using this, we can rewrite: + +$$ +\mathbf {y} _ {v} = \sum_ {u = 1} ^ {n} \mathbf {w} _ {u, v} + \boldsymbol {b} +$$ + +By assumption on the distribution from which we draw graphs with node features, the $\mathbf{w}_{u,v}$ 's are independent for any fixed $v$ . + +Now fix $i \in \{1, \dots, d(1)\}$ . By Lemma A.2 and Lemma A.4 we have that each $[\mathbf{w}_{u,v}]_i$ is sub-Gaussian. We therefore apply Hoeffding's Inequality to the sum. Note that $\mathbf{w}_{u,v}$ can have one of two (sub-Gaussian) distributions, depending on whether $u = v$ . Therefore, by Theorem A.1 and Lemma A.3, there are constants $c$ and $K$ such that, no matter how many nodes $n$ there are, we have that: + +$$ +\mathbb {P} (| [ \mathbf {y} _ {v} ] _ {i} - \mathbb {E} [ \mathbf {y} _ {v} ] _ {i} | \geq t) = \mathbb {P} \left(\left| \sum_ {u = 1} ^ {n} ([ \mathbf {w} _ {u, v} ] _ {i} - \mathbb {E} [ \mathbf {w} _ {u, v} ] _ {i}) \right| \geq t\right) \leq 2 \exp \left(- \frac {c t ^ {2}}{K ^ {2} n}\right) +$$ + +Let's now compute $\mathbb{E}[\mathbf{y}_v]$ , by first computing $\mathbb{E}[\mathbf{w}_{u,v}]$ . 
When $u = v$ we have that:

$$
\begin{array}{l} \mathbb{E}[\mathbf{w}_{v,v}] = \mathbb{E}\left[ (\boldsymbol{W}_s + \boldsymbol{W}_g) \mathbf{x}_v^{(0)} \right] \\ = \left( \boldsymbol{W}_s + \boldsymbol{W}_g \right) \mathbb{E}\left[ \mathbf{x}_v^{(0)} \right] \\ = \left( \boldsymbol{W}_s + \boldsymbol{W}_g \right) \boldsymbol{\mu} \end{array}
$$

When $u \neq v$ we have, using the independence of $\mathbf{A}_{uv}$ and $\mathbf{x}_u$:

$$
\begin{array}{l} \mathbb{E}\left[ \mathbf{w}_{u,v} \right] = \mathbb{E}\left[ \left( \mathbf{A}_{uv} \boldsymbol{W}_n + \boldsymbol{W}_g \right) \mathbf{x}_u^{(0)} \right] \\ = \left( \mathbb{E}\left[ \mathbf{A}_{uv} \right] \boldsymbol{W}_n + \boldsymbol{W}_g \right) \mathbb{E}\left[ \mathbf{x}_u^{(0)} \right] \\ = \left( r \boldsymbol{W}_n + \boldsymbol{W}_g \right) \boldsymbol{\mu} \end{array}
$$

Therefore (separating $\mathbf{w}_{v,v}$ from $\mathbf{w}_{u,v}$ for $u \neq v$):

$$
\mathbb{E}[\mathbf{y}_v] = \sum_{u=1}^{n} \mathbb{E}[\mathbf{w}_{u,v}] + \boldsymbol{b} = (n - 1)(r \boldsymbol{W}_n + \boldsymbol{W}_g) \boldsymbol{\mu} + (\boldsymbol{W}_s + \boldsymbol{W}_g) \boldsymbol{\mu} + \boldsymbol{b}
$$

Since $\mathcal{M}$ is synchronously saturating for $\mathbb{G}(n,r)$ and $\mathbb{D}(d)$, we know that $[(r\pmb{W}_n + \pmb{W}_g)\pmb{\mu}]_i \neq 0$. Assume without loss of generality that $[(r\pmb{W}_n + \pmb{W}_g)\pmb{\mu}]_i > 0$. Then the expected value of $[\mathbf{y}_v]_i$ increases as $n$ tends to infinity; moreover we have a bound on how much $[\mathbf{y}_v]_i$ can vary around its expected value.

Recall that the non-linearity $\sigma$ is eventually constant in both directions. In particular, it is constant with value $\sigma_{\infty}$ above some $x_{\infty}$.
When $\mathbb{E}[\mathbf{y}_v]_i > x_{\infty}$ the probability that $[\mathbf{y}_v]_i$ doesn't surpass this threshold is: + +$$ +\begin{array}{l} \mathbb {P} \left(\left[ \mathbf {y} _ {v} \right] _ {i} < x _ {\infty}\right) \leq \mathbb {P} \left(\left| \left[ \mathbf {y} _ {v} \right] _ {i} - \mathbb {E} \left[ \mathbf {y} _ {v} \right] _ {i} \right| > \left| x _ {\infty} - \mathbb {E} \left[ \mathbf {y} _ {v} \right] _ {i} \right|\right) \\ \leq 2 \exp \left(- \frac {c \left| x _ {\infty} - \mathbb {E} \left[ \mathbf {y} _ {v} \right] _ {i} \right| ^ {2}}{K ^ {2} n}\right) \\ \end{array} +$$ + +There is a constant $\rho$ such that $|x_{\infty} - \mathbb{E}[\mathbf{y}_v]_i|\geq \rho n$ . Hence for sufficiently large $n$ (i.e. such that $\mathbb{E}[\mathbf{y}_v]_i > x_\infty$ ): + +$$ +\mathbb {P} ([ \mathbf {y} _ {v} ] _ {i} < x _ {\infty}) \leq 2 \exp \left(- \frac {c \rho^ {2} n ^ {2}}{K ^ {2} n}\right) = 2 \exp \left(- \frac {c \rho^ {2} n}{K ^ {2}}\right) +$$ + +Since the activation $[\mathbf{x}_v]_i = \sigma ([\mathbf{y}_v]_i)$ , the probability that $[\mathbf{x}_v]_i$ takes value $\sigma_{\infty}$ is at least $1 - 2\exp \left(-c\rho^{2}n / K^{2}\right)$ . Now, for each node $v$ and each $i\in \{1,\dots ,d(1)\}$ , the activation $[\mathbf{x}_v]_i$ is either $\sigma_{\infty}$ with high probability or $\sigma_{-\infty}$ with high probability. By taking a union bound, for sufficiently large $n$ the probability that every $[\mathbf{x}_v]_i$ takes its corresponding value is at least: + +$$ +1 - 2 n d (1) \exp \left(- \frac {c \rho^ {2} n}{K ^ {2}}\right) +$$ + +This tends to 1 as $n$ tends to infinity. In other words, there is $z_{1} \in \{\sigma_{-\infty}, \sigma_{\infty}\}^{d(1)}$ such that $\mathbf{x}_v^{(1)} = z_1$ for every $v$ asymptotically. + +We now proceed to the second layer, and condition on the event that $\mathbf{x}_v^{(1)} = z_1$ for every $v$ . In this case, we have that the second layer preactivation for node $v$ is as follows. 
$$
\mathbf{y}_v^{(2)} = (\pmb{W}_s^{(2)} + |\mathcal{N}(v)| \pmb{W}_n^{(2)} + n \pmb{W}_g^{(2)}) \pmb{z}_1 + \pmb{b}^{(2)}
$$

Since we're in the situation where every $\mathbf{x}_u^{(1)} = z_1$, the degree $|\mathcal{N}(v)|$ is simply binomially distributed $\mathrm{Bin}(n,r)$. The preactivation $\mathbf{y}_v^{(2)}$ then has expected value:

$$
\mathbb{E}\left[ \mathbf{y}_v^{(2)} \right] = n \left( r \boldsymbol{W}_n^{(2)} + \boldsymbol{W}_g^{(2)} \right) \boldsymbol{z}_1 + \boldsymbol{W}_s^{(2)} \boldsymbol{z}_1 + \boldsymbol{b}^{(2)}
$$

Fix $i \in \{1, \ldots, d(2)\}$. Since $\mathcal{M}$ is synchronously saturating for $\mathbb{G}(n,r)$ and $\mathbb{D}(d)$, we have that $[(r\pmb{W}_n^{(2)} + \pmb{W}_g^{(2)})\pmb{z}_1]_i \neq 0$. Assume without loss of generality that $[(r\pmb{W}_n^{(2)} + \pmb{W}_g^{(2)})\pmb{z}_1]_i > 0$. Then $\mathbb{E}[\mathbf{y}_v^{(2)}]_i$ tends to infinity as $n$ increases.

Furthermore, we can view $[(|\mathcal{N}(v)| \boldsymbol{W}_n^{(2)} + n \boldsymbol{W}_g^{(2)}) \boldsymbol{z}_1]_i$ as the sum of $n$ independent two-valued random variables, each taking value $[(\boldsymbol{W}_n^{(2)} + \boldsymbol{W}_g^{(2)})\boldsymbol{z}_1]_i$ with probability $r$ and $[\boldsymbol{W}_g^{(2)}\boldsymbol{z}_1]_i$ otherwise. Since by Lemma A.4 Bernoulli random variables are sub-Gaussian, as in the first-layer case we can apply Hoeffding's Inequality to bound the probability that $[\mathbf{y}_v^{(2)}]_i$ is less than $x_{\infty}$. We get that, for sufficiently large $n$, there is a constant $K$ such that this probability is bounded by $2\exp(-Kn)$.

Then, as before, we find $z_2 \in \{\sigma_{-\infty}, \sigma_{\infty}\}^{d(2)}$ such that, for sufficiently large $n$, every $\mathbf{x}_v^{(2)} = z_2$ with probability at least $1 - 2n\,d(2)\exp(-Kn)$.

Finally, this argument is applied inductively through all layers. As the number of layers remains constant (since $\mathcal{M}$ is fixed), we find that the node embeddings throughout the model are asymptotically constant.
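The saturation phenomenon behind Lemma 4.11 can be illustrated numerically. The sketch below is our own construction, not code from the paper: it runs one SUMGNN$^{+}$-style layer (self, neighbour-sum, and global-sum terms) with a hard clamp to $[-1,1]$ standing in for an eventually constant non-linearity, on Erdős-Rényi graphs of increasing size with generic random weights. The spread of the node embeddings typically collapses as the preactivations are pushed past the saturation thresholds.

```python
import numpy as np

rng = np.random.default_rng(0)

def clamp(x):
    # An eventually constant non-linearity: constant outside [-1, 1].
    return np.clip(x, -1.0, 1.0)

d, r = 4, 0.3                                     # illustrative dimensions
Ws, Wn, Wg = (rng.normal(size=(d, d)) for _ in range(3))
b = rng.normal(size=d)

spreads = []
for n in [50, 200, 800]:
    A = np.triu((rng.random((n, n)) < r).astype(float), 1)
    A = A + A.T                                   # undirected E-R graph, no self-loops
    X = rng.random((n, d))                        # iid U[0,1] node features
    # SUMGNN+-style layer: W_s x_v + W_n sum_{u in N(v)} x_u + W_g sum_{u in V} x_u + b
    H = clamp(X @ Ws.T + (A @ X) @ Wn.T + X.sum(axis=0) @ Wg.T + b)
    spreads.append(float(np.ptp(H, axis=0).max()))  # max spread across nodes

print(spreads)  # the spread typically shrinks toward 0 as n grows
```

As the sum aggregation makes every preactivation component grow linearly in $n$ (while fluctuations grow only like $\sqrt{n}$), all nodes end up at the same saturated embedding, mirroring the proof.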
With the key lemma in place, we can now prove the main theorem.

Proof of Theorem 4.10. Applying Lemma 4.11 to the final layer, we find $\mathbf{z}_T \in \{\sigma_{-\infty}, \sigma_{\infty}\}^{d(T)}$ such that every $\mathbf{x}_v^{(T)} = \mathbf{z}_T$ with probability tending to 1. Since we use either average or component-wise maximum pooling, this means that the final graph-level representation is asymptotically constant, and thus the output of the classifier must be asymptotically constant.

# D Proof of the uniform expressive power of $\mathbf{SUMGNN}^+$ with random features

We make use of a result due to Abboud et al. [1] which shows that $\mathrm{SUMGNN^{+}}$ models with random features can approximate any graph invariant on graphs with a fixed number of nodes.

Definition D.1. Let $f$ be a function on graphs, and let $\zeta$ be a random function on graphs. Take $\delta > 0$ and $N \in \mathbb{N}$. Then $\zeta$ $\delta$-approximates $f$ up to $N$ if:

$$
\forall n \leq N: \mathbb{P}(\zeta(G) = f(G) \mid |G| = n) \geq 1 - \delta
$$

For completeness, we state the definition of the linearized sigmoid here.

Definition D.2. The linearized sigmoid is the function $\mathbb{R} \to \mathbb{R}$ defined as follows:

$$
x \mapsto \left\{ \begin{array}{ll} -1 & \text{if } x \in (-\infty, -1), \\ x & \text{if } x \in [-1, 1), \\ 1 & \text{otherwise}. \end{array} \right.
$$

Theorem D.3. Let $\xi$ be any graph invariant. For every $N \in \mathbb{N}$ and $\delta > 0$ there is a $\mathrm{SUMGNN}^+$ with random features $\mathcal{M}$ which $\delta$-approximates $\xi$ up to $N$. Moreover, $\mathcal{M}$ uses the linearized sigmoid as the non-linearity and the distribution of the initial node embeddings consists of $d$ iid $U[0,1]$ random variables.

Proof. See [1, Theorem 1].
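For concreteness, Definition D.2 is just the identity clamped to $[-1, 1]$; a direct transcription (ours) in Python:

```python
def linearized_sigmoid(x: float) -> float:
    # Definition D.2: -1 on (-inf, -1), the identity on [-1, 1), and 1 otherwise.
    if x < -1.0:
        return -1.0
    if x < 1.0:
        return x
    return 1.0
```

Note that this function is 1-Lipschitz and eventually constant in both directions, the two properties the saturation arguments above rely on.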
With this result we can now prove the uniform expressivity result.

Proof of Theorem 5.2. First, $\xi$ satisfies a zero-one law for $\mathbb{G}(n,1/2)$. Without loss of generality assume that $\xi$ is asymptotically 1. There is $N \in \mathbb{N}$ such that for every $n > N$ we have:

$$
\mathbb{P}(\xi(G) = 1 \mid G \sim \mathbb{G}(n, 1/2)) \geq 1 - \delta
$$

Note that this $N$ depends on both $\xi$ and $\delta$.

Second, by Theorem D.3 there is a $\mathrm{SUMGNN}^+$ with random features $\mathcal{M}'$ which $\delta$-approximates $\xi$ up to $N$. Moreover, $\mathcal{M}'$ uses the linearized sigmoid as the non-linearity and the distribution of the initial node embeddings consists of $d$ iid $U[0,1]$ random variables.

Using the global readout and the linearized sigmoid, we can condition the model behavior on the number of nodes. We give a rough description of the model as follows. Define a $\mathrm{SUMGNN}^+$ with random features $\mathcal{M}$ by extending $\mathcal{M}'$ as follows.

- Increase the number of layers to at least three.
- Increase each embedding dimension by 1. For convenience call this the 0th component of each embedding.
- Use the bias term in the first layer to ensure that the 0th component of the activation $\mathbf{x}_v^{(1)}$ for each node $v$ is 1.
- Use the global readout to threshold the number of nodes on $N$. The 0th row of the matrix $W_{g}^{(2)}$ should have a 2 in the 0th position and 0's elsewhere. The 0th component of the bias vector $b^{(2)}$ should be $-(2N + 1)$. This ensures that the 0th component of every activation $\mathbf{x}_v^{(2)}$ is 1 if $n > N$ and -1 otherwise.
- Propagate this value through the 0th component of each layer embedding.
- In the final layer, use this value to decide whether to output what $\mathcal{M}'$ would output, or simply to output 1.
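The node-count threshold in this construction can be checked in a few lines. This sketch is ours, with the second-layer bias taken to be $-(2N+1)$ so that the clamp outputs exactly $\pm 1$: the 0th component of $\mathbf{x}_v^{(2)}$ is the global sum readout over $n$ nodes (each contributing 1, scaled by the weight 2) plus the bias, passed through the linearized sigmoid.

```python
def linearized_sigmoid(x: float) -> float:
    # Clamp to [-1, 1] (Definition D.2).
    return max(-1.0, min(1.0, x))

def size_gate(n: int, N: int) -> float:
    # 0th component of the second-layer activation: the global (sum) readout
    # contributes 2 * n, and the bias -(2N + 1) places the threshold exactly
    # between n = N and n = N + 1.
    return linearized_sigmoid(2 * n - (2 * N + 1))

print(size_gate(10, 10), size_gate(11, 10))  # -1.0 1.0
```

For $n = N$ the preactivation is exactly $-1$ and for $n = N + 1$ it is exactly $+1$, so the gate is a clean $\pm 1$ indicator of $n > N$.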
For any $n \leq N$ the model $\mathcal{M}$ behaves like $\mathcal{M}'$. Therefore:

$$
\mathbb{P}(\xi(G) = \mathcal{M}(G) \mid |G| = n) \geq 1 - \delta
$$

On the other hand, for $n > N$ the model $\mathcal{M}$ simply outputs 1 and so:

$$
\mathbb{P}(\xi(G) = \mathcal{M}(G) \mid |G| = n) = \mathbb{P}(\xi(G) = 1 \mid |G| = n) \geq 1 - \delta
$$

Thus $\mathcal{M}$ uniformly $\delta$-approximates $\xi$.

# E Further Experiments

In this section, we focus on GCNs and provide further experiments regarding our results. In particular, we pose the following questions:

1. Our theoretical results entail a zero-one law for a large class of distributions: do we empirically observe a zero-one law when node features are instead drawn from a normal distribution (Appendix E.1)?
2. Our theoretical results state a zero-one law for a large class of non-linearities: do we empirically observe a zero-one law when considering other common non-linearities (Appendix E.2)?
3. Does a zero-one law also manifest itself empirically for GAT models (Appendix E.3)?
4. Do we empirically observe a zero-one law if we were to consider sparse Erdős-Rényi graphs (Appendix E.4)?
5. Is there empirical evidence for our results to apply to other random graph models, such as the Barabási-Albert model (Appendix E.5)?

# E.1 Experiments with initial node features drawn from a normal distribution

![](images/e04fb65655b93d6370deec0ee9e746f8824916e1bd85b98e361c916f1634a98b.jpg)
Figure 2: Normally distributed random node features with GCN models. Each plot shows the proportion of graphs of certain size which are classified as 1 by a set of ten GCN models. Each curve (color-coded) shows the behavior of a model, as we draw increasingly larger graphs. The phenomenon is observed for 1-layer models (left column), 2-layer models (mid column), and 3-layer models (last column).
We draw the initial features randomly from a normal distribution with mean 0.5 and standard deviation 1.

![](images/5dd9e6c51728ddf6ecc041d3121fdc6efb331034bb270c0c0eaf7a5355f6a234.jpg)

![](images/4629237795f2d05c500d9ac9947e5683f51a457f3261d91fbe8eaed8fb6fd3cd.jpg)

Here we consider using a normal distribution to draw our initial node features. Note that normal distributions are sub-Gaussian, and hence our theoretical findings (Theorem 4.6) confer a zero-one law in this case. Figure 2 demonstrates the results for GCN models. We observe the expected asymptotic behavior in most cases; however, in the two- and three-layer cases a few models have not converged by the end of the experiment.

# E.2 Experiments with other non-linearities

In this subsection we test the effect of using different non-linearities in the layers of our GCN models. Theorem 4.6 applies in all of these cases, so we do expect to see a zero-one law. Figures 3 to 5 present the results for ReLU, tanh and sigmoid, respectively. We see the expected behavior in all cases. Note however that, in contrast with the other non-linearities, when we use sigmoid we observe that the rate of convergence actually increases as the number of layers increases. This suggests a complex relationship between the rate of convergence, the non-linearity and the number of layers.

![](images/64633c2e170e019f4b69621943bfb85ef10cb447da3725593e140311b71a163b.jpg)
Figure 3: GCN models with ReLU non-linearity. Each plot shows the proportion of graphs of certain size which are classified as 1 by a set of ten GCN models. Each curve (color-coded) shows the behavior of a model, as we draw increasingly larger graphs. The phenomenon is observed for 1-layer models (left column), 2-layer models (mid column), and 3-layer models (last column). This time we choose the ReLU activation function for the GNN layers. Apart from this, the setup is the same as in the main body of the paper.
+ +![](images/b81b0621c0f2fc17c977d03e8f420953c8477466917fb097fc4907558b2406d5.jpg) + +![](images/b2828127e4b7a2ff377fd0aa553bc95559bc184787466f669b9495b2aafd53af.jpg) + +![](images/9a13b93ebf05255c6f6779b448bbcd4c7edc18c92567e589a8d9ef4573a140c3.jpg) +Figure 4: GCN models with tanh non-linearity. Each plot shows the proportion of graphs of certain size which are classified as 1 by a set of ten GCN models. Each curve (color-coded) shows the behavior of a model, as we draw increasingly larger graphs. The phenomenon is observed for 1-layer models (left column), 2-layer models (mid column), and 3-layer models (last column). We use tanh as an activation function for the GNN layers, and keep everything else the same. + +![](images/a10a4b24e3f15070769432f41a2892257457a8a84bf05bf10e3be11d42e2c363.jpg) + +![](images/7edfe13a12a5a7eced9c1f7aff104c13463bd5e8ccf0874d75a80d2cae7e017a.jpg) + +![](images/70fd605387f7e0c3201ce2c43149f4f2983b647ef32137aa4700084a8848a088.jpg) +Figure 5: GCN models with sigmoid non-linearity. Each plot shows the proportion of graphs of certain size which are classified as 1 by a set of ten GCN models. Each curve (color-coded) shows the behavior of a model, as we draw increasingly larger graphs. The phenomenon is observed for 1-layer models (left column), 2-layer models (mid column), and 3-layer models (last column). We use the sigmoid activation function for the GNN layers, and keep everything else the same. + +![](images/da91221987bd85c4fa1e752b296ac1a79c9287cd6c09f226f727473652d022ec.jpg) + +![](images/5f4faaae7d99ab4e0a40014710f8441113384b0b5b761cf65a45076c3e042a7b.jpg) + +# E.3 Experiments with GAT + +Here we investigate the asymptotic behaviour of a GNN architecture not considered in the main body: the Graph Attention Network [37]. Cast as an MPNN, this architecture uses an attention mechanism as the aggregate function $\phi$ in the message passing step. 
The techniques used in this paper to establish a zero-one law for other GNN architectures do not easily extend to GAT. However, our experiments demonstrate a very quick convergence to 0 or 1.

![](images/99405a689928fb58c95e979a7934cf114d3f0c0a835219a421e68adb22b54f93.jpg)
Figure 6: Ten GAT models with number of layers 1, 2 and 3 are run on graphs of increasing size, with the proportion of graphs classified as 1 recorded. We observe convergence to a zero-one law very quickly.

![](images/41d3aecb4f3a9d7fec021394654f38aad6a2b776574ad05c48a8642134a92c48.jpg)

![](images/c85bff4b036643a24591c9da90daa8d090a29a9d8c5e9293032e37288f814278.jpg)

# E.4 Experiments on sparse Erdős-Rényi graphs

In these experiments, we consider GCN models on a variation of the Erdős-Rényi distribution in which the edge probability $r$ is allowed to vary as a function of $n$. Specifically, we set $r = \log(n)/n$, which yields sparser graphs than in the standard distribution. Our experiments provide evidence for a zero-one law also in this case (Figure 7).

![](images/4e1e6b1da481b62439096ba0a2365286955b99253e4514f46c1d0614b2bef8e3.jpg)
Figure 7: Sparse Erdős-Rényi graphs with GCN models. Each plot shows the proportion of graphs of certain size which are classified as 1 by a set of ten GCN models. Each curve (color-coded) shows the behavior of a model, as we draw increasingly larger graphs. The phenomenon is observed for 1-layer models (left column), 2-layer models (mid column), and 3-layer models (last column). We let the probability $r$ of an edge appearing be $\frac{\log(n)}{n}$. All the other parameters are the same as in the experiments of the main body of the paper.
![](images/9da100aa7ab4f224586b9660dbbae80d2ecca146a64ade901394b5623980ea3c.jpg)

![](images/5b85c82788adda031276d676fa5b1a7b9bbfcd8b303b4ae39fe41aa85939f246.jpg)

# E.5 Experiments on the Barabási-Albert random graph model

In this subsection, we consider another alternative graph distribution: the Barabási-Albert model [3]. This model aims to better capture the degree distributions commonly found in real-world networks. We can again observe a zero-one law for GCN models under this distribution (Figure 8).

![](images/20e1d9838d4bd9cf392118666ea5e31fb9efec1481f98c33ccff5c111b9c494b.jpg)
Figure 8: Barabási-Albert graphs with GCN models. Each plot shows the proportion of graphs of certain size which are classified as 1 by a set of ten GCN models. Each curve (color-coded) shows the behavior of a model, as we draw increasingly larger graphs. The phenomenon is observed for 1-layer models (left column), 2-layer models (mid column), and 3-layer models (last column). We generate the graphs using the Barabási-Albert model; apart from this the setup is the same as in the experiments in the main body of the paper.
+ +![](images/2b852d69fd05b6bd4a9f03a79819c9910300df754140ebbfbad2cfed5c76f3b9.jpg) + +![](images/7ec25883921370578a1c523425d77093c1fb5ce1582d34e430a3d03404aed124.jpg) \ No newline at end of file diff --git a/zeroonelawsofgraphneuralnetworks/images.zip b/zeroonelawsofgraphneuralnetworks/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..7c8073aecae97e2f6a1b597a18bf050330bb1b81 --- /dev/null +++ b/zeroonelawsofgraphneuralnetworks/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b4bb9ea0a6b4fd1f0329378536511b1018e97a4bbf0e1dec03f1f451f1e0d262 +size 799826 diff --git a/zeroonelawsofgraphneuralnetworks/layout.json b/zeroonelawsofgraphneuralnetworks/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..746df691118a492142fb10cbd93d1ca1facf73a8 --- /dev/null +++ b/zeroonelawsofgraphneuralnetworks/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d40a4b72c1b74a207dd4222eb29e1dcd920a2826c420451a8530e1aa4e3eb6f2 +size 1112846 diff --git a/zeroregretperformativepredictionunderinequalityconstraints/18566c59-e24f-4729-bb72-339a98ff0cc2_content_list.json b/zeroregretperformativepredictionunderinequalityconstraints/18566c59-e24f-4729-bb72-339a98ff0cc2_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..a7f9bea177ce1e7b860a1733ade827543584f64d --- /dev/null +++ b/zeroregretperformativepredictionunderinequalityconstraints/18566c59-e24f-4729-bb72-339a98ff0cc2_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4066a8ca4f29a09b90addddf367a9c17fc9e1e816287311e5d40ee4e0bd77ce5 +size 84100 diff --git a/zeroregretperformativepredictionunderinequalityconstraints/18566c59-e24f-4729-bb72-339a98ff0cc2_model.json b/zeroregretperformativepredictionunderinequalityconstraints/18566c59-e24f-4729-bb72-339a98ff0cc2_model.json new file mode 100644 index 
0000000000000000000000000000000000000000..b795bf495c523af20a510d4869166fbbe5e417d4 --- /dev/null +++ b/zeroregretperformativepredictionunderinequalityconstraints/18566c59-e24f-4729-bb72-339a98ff0cc2_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:189f9ff65cde6421941dc6953e0a6a4dffccf293c4dc5fb013a70eb0ff1ef003 +size 101540 diff --git a/zeroregretperformativepredictionunderinequalityconstraints/18566c59-e24f-4729-bb72-339a98ff0cc2_origin.pdf b/zeroregretperformativepredictionunderinequalityconstraints/18566c59-e24f-4729-bb72-339a98ff0cc2_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..3415cdab050355873aa9168c1ba5faa8dc080223 --- /dev/null +++ b/zeroregretperformativepredictionunderinequalityconstraints/18566c59-e24f-4729-bb72-339a98ff0cc2_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:54fb7045290944186cc2cb347cb708473d183dcb4df4bcce08fd5b748c5c2b80 +size 543867 diff --git a/zeroregretperformativepredictionunderinequalityconstraints/full.md b/zeroregretperformativepredictionunderinequalityconstraints/full.md new file mode 100644 index 0000000000000000000000000000000000000000..a3325ff9a927f24779a03cec2658a57d42fad5e0 --- /dev/null +++ b/zeroregretperformativepredictionunderinequalityconstraints/full.md @@ -0,0 +1,366 @@ +# Zero-Regret Performative Prediction Under Inequality Constraints + +Wenjing Yan Xuanyu Cao * + +Department of Electronic and Computer Engineering + +The Hong Kong University of Science and Technology + +wj.yan@connect.ust.hk, eexcao@ust.hk + +# Abstract + +Performative prediction is a recently proposed framework where predictions guide decision-making and hence influence future data distributions. Such performative phenomena are ubiquitous in various areas, such as transportation, finance, public policy, and recommendation systems. 
To date, work on performative prediction has only focused on unconstrained scenarios, neglecting the fact that many real-world learning problems are subject to constraints. This paper bridges this gap by studying performative prediction under inequality constraints. Unlike most existing work that provides only performative stable points, we aim to find the optimal solutions. Anticipating performative gradients is a challenging task, due to the agnostic performative effect on data distributions. To address this issue, we first develop a robust primal-dual framework that requires only approximate gradients up to a certain accuracy, yet delivers the same order of performance as the stochastic primal-dual algorithm without performativity. Based on this framework, we then propose an adaptive primal-dual algorithm for location family. Our analysis demonstrates that the proposed adaptive primal-dual algorithm attains $\mathcal{O}(\sqrt{T})$ regret and constraint violations, using only $\sqrt{T} + 2T$ samples, where $T$ is the time horizon. To our best knowledge, this is the first study and analysis on the optimality of the performative prediction problem under inequality constraints. Finally, we validate the effectiveness of our algorithm and theoretical results through numerical simulations. + +# 1 Introduction + +Stochastic optimization plays a critical role in statistical sciences and data-driven computing, where the goal is to learn decision rules (e.g., classifiers) based on limited samples that generalize well to the entire population. Most prior studies on stochastic optimization [Heyman and Sobel, 2004; Karimi et al., 2019; Powell, 2019] rely on the assumption that the data of the entire population follows a static distribution. This assumption, however, does not hold in applications where the data distributions change dynamically in response to decision-makers' actions [Hardt et al., 2016; Dong et al., 2018]. 
For instance, in transportation, travel time estimates [Mori et al., 2015] influence routing decisions, resulting in realized travel times; in banking, credit evaluation criteria [Abdou and Pointon, 2011] guide borrowers' behaviors and subsequently their credit scores; and in advertising, recommendations [García-Sánchez et al., 2020] shape customer preferences, leading to consumption patterns. Such interplay between decision-making and data distribution arises widely in various areas, such as transportation, finance, public policy, and recommendation systems. + +The seminal work [Perdomo et al., 2020] formalized the phenomenon as performative prediction, which represents the strategic responses of data distributions to the taken decisions via decision- + +dependent distribution maps [Quinonero-Candela et al., 2008]. Since then, an increasing body of research has been dedicated to performative prediction problems. Most existing studies are focused on identifying performative stable points [Li and Wai, 2022; Li et al., 2022; Drusvyatskiy and Xiao, 2022; Brown et al., 2022; Mendler-Dünner et al., 2020; Wood et al., 2021; Ray et al., 2022], given the complexities of the decision-induced distribution shifts and the unknown decision-dependent distributions. The proposed algorithms typically iteratively retrain the deployed models until convergence. However, performative stability generally does not imply performative optimality. Aiming to achieve optimal performance, a few recent works designed effective algorithms by leveraging rich performative feedback [Jagadeesan et al., 2022], or by making some parametric assumptions on the underlying distribution maps. For instance, the distribution maps belong to the location family with linear structure [Miller et al., 2021] or are from the exponential family [Izzo et al., 2021]. + +All the aforementioned work on performative prediction is focused on unconstrained learning problems. 
However, in the real world, many performative prediction applications are subject to constraints [Detassis et al., 2021; Wood and Dall'Anese, 2022a]. Constraints can be used to ensure the satisfaction of desired properties, such as fairness, safety, and diversity. Examples include safety and efficiency constraints in transportation [Metz, 2021], relevance and diversity constraints in advertising [Khamis, 2020], and risk tolerance and portfolio constraints in financial trading [Föllmer and Schied, 2002]. In addition, constraints can serve as side information to enhance the learning outcomes, e.g., by narrowing the scope of exploration or by incorporating prior knowledge [Serafini and Garcez, 2016; Wu et al., 2018]. As performative shifts can rarely be analyzed offline, incorporating constraints on what constitutes safe exploration [Turchetta et al., 2019] or facilitates optimization [Wood and Dall'Anese, 2022b] is of crucial importance. + +Despite its importance, research on performative prediction under constraints has so far been neglected. Although some work [Izzo et al., 2021; Piliouras and Yu, 2022] restricted decision variables to certain regions, this feasible-set restriction was simply handled by projections. This paper bridges this gap by studying the performative prediction problem under inequality constraints, which simple projection is inadequate to handle. Unlike most existing work that provides only performative stable points, we aim to find the optimal solutions. As mentioned above, finding performative optima is challenging because we now need to anticipate the performative effect actively rather than simply retrain models in a myopic manner. + +However, the performative effect on distribution maps is unknown, which hinders the computation of the exact performative gradient. To solve this problem, we develop a robust primal-dual framework that admits inexact gradients.
We ask the following questions: How does the gradient approximation error affect the performance of the primal-dual framework? Under what accuracy can the approximate gradients maintain the performance order of the stochastic primal-dual algorithm without performativity? How to construct effective gradient approximations that attain the desired accuracy? We answer the above thoroughly. Our idea hinges on enhancing gradient approximation with the parametric knowledge of distribution maps. In particular, we follow existing studies [Miller et al., 2021; Jagadeesan et al., 2022] and focus on the family of location maps. Location family exhibits a favorable linear structure for algorithm development while maintaining broad generality to model many real-world applications. Distribution maps of this type are ubiquitous throughout the performative prediction literature, such as strategic classification [Hardt et al., 2016; Perdomo et al., 2020], linear regression [Miller et al., 2021], email spam classification [Li et al., 2022], ride-share [Narang et al., 2022], among others. Nevertheless, we emphasize that our robust primal-dual framework is applicable to other forms of distributions with effective gradient approximation methods. + +To our best knowledge, this paper provides the first study and analysis on the optimality of performative prediction problems under inequality constraints. We highlight the following key contributions: + +- We develop a robust primal-dual framework that requires only approximate gradients up to an accuracy of $\mathcal{O}(\sqrt{T})$ , yet delivers the same order of performance as the stochastic primal-dual algorithm without performativity, where $T$ is the time horizon. Notably, the robust primal-dual framework does not restrict the approximate gradients to be unbiased and hence offers more flexibility to the design of gradient approximation. 
+- Based on this framework, we propose an adaptive primal-dual algorithm for the location family, which consists of an offline stochastic approximation and an online parameter estimation for the performative gradient approximation. Our analysis demonstrates that the proposed algorithm achieves $\mathcal{O}(\sqrt{T})$ regret and constraint violations, using only $\sqrt{T} + 2T$ samples. + +Finally, we conduct experiments on two examples: multi-task linear regression and multi-asset portfolio. The numerical results validate the effectiveness of our algorithm and theoretical analysis. + +# 1.1 Related Work + +The study on performative prediction was initiated in [Perdomo et al., 2020], where the authors defined the notion of performative stability and demonstrated how performative stable points can be found through repeated risk minimization and stochastic gradient methods. Since then, substantial efforts have been dedicated to identifying performative stable points in various settings, such as single-agent [Mendler-Dünner et al., 2020; Drusvyatskiy and Xiao, 2022; Brown et al., 2022], multi-agent [Li et al., 2022; Piliouras and Yu, 2022], games [Narang et al., 2022], reinforcement learning [Mandal et al., 2023], and online learning [Wood et al., 2021; Wood and Dall'Anese, 2022a].
Alternatively, [Jagadeesan et al., 2022] proposed a performative confidence bounds algorithm by leveraging rich performative feedback, where the key idea is to exhaustively explore the feasible region with an efficient discarding mechanism. + +A closely related work is [Wood and Dall'Anese, 2022b], which studied stochastic saddle-point problems with decision-dependent distributions. That paper focused on performative stable points (equilibrium points), whereas we aim at the performative optima, which is more challenging. Another difference is that [Wood and Dall'Anese, 2022b] only demonstrated the convergence of the proposed primal-dual algorithm in the limit, without providing an explicit finite-time convergence rate. In contrast, we provide $\mathcal{O}(\sqrt{T})$ regret and $\mathcal{O}(\sqrt{T})$ constraint violation bounds for the proposed algorithm in this paper. + +# 2 Problem Setup + +We study a performative prediction problem with loss function $\ell(\theta; Z)$ , where $\theta \in \Theta$ is the decision variable, $Z \in \mathbb{R}^k$ is an instance, and $\Theta \subseteq \mathbb{R}^d$ is the set of available decisions. Unlike in stationary stochastic optimization, where the distributions of instances are fixed, in performative prediction the distribution of $Z$ varies with the decision variable $\theta$ , represented by $Z \sim \mathcal{D}(\theta)$ . In this paper, we consider that the decision variable $\theta$ is subject to a constraint $\mathbf{g}(\theta) \preceq \mathbf{0}$ , where $\mathbf{g}(\cdot): \Theta \to \mathbb{R}^m$ . The constraint $\mathbf{g}(\cdot)$ can be imposed on $\theta$ to ensure certain properties, such as fairness, safety, and diversity, or to incorporate prior knowledge. We assume that $\mathbf{g}(\cdot)$ is available to the decision-maker in advance of the optimization.
Ideally, the goal of the decision-maker is to solve the following stochastic problem:

$$
\min _ {\boldsymbol {\theta} \in \boldsymbol {\Theta}} \quad \mathbb {E} _ {Z \sim \mathcal {D} (\boldsymbol {\theta})} \ell (\boldsymbol {\theta}; Z) \quad \text {s.t.} \quad \mathbf {g} (\boldsymbol {\theta}) \preceq \mathbf {0}, \tag {1}
$$

where $\mathbb{E}_{Z\sim \mathcal{D}(\pmb {\theta})}\ell (\pmb {\theta};Z)$ is referred to as the performative risk, denoted by $\mathrm{PR}(\pmb {\theta})$.

Problem (1), however, cannot be solved offline because the distribution map $\mathcal{D}(\pmb{\theta})$ is unknown. Instead, the decision-maker needs to interact with the environment by making decisions to explore the underlying distributions. Given the online nature of this task, we measure the loss of a sequence of chosen decisions $\pmb{\theta}_1,\dots ,\pmb{\theta}_T$ by performative regret, defined as

$$
\operatorname {R e g} (T) := \sum_ {t = 1} ^ {T} \left(\mathbb {E} \left[ \operatorname {P R} \left(\boldsymbol {\theta} _ {t}\right) \right] - \operatorname {P R} \left(\boldsymbol {\theta} _ {\mathrm {P O}}\right)\right),
$$

where the expectation is taken over the possible randomness in the choice of $\{\pmb{\theta}_t\}_{t=1}^T$ , and $\pmb{\theta}_{\mathrm{PO}}$ is the performative optimum, defined as

$$
\boldsymbol {\theta} _ {\mathrm {P O}} \in \arg \min _ {\boldsymbol {\theta} \in \boldsymbol {\Theta}} \quad \mathbb {E} _ {Z \sim \mathcal {D} (\boldsymbol {\theta})} \ell (\boldsymbol {\theta}; Z) \quad \text {s.t.} \quad \mathbf {g} (\boldsymbol {\theta}) \preceq \mathbf {0}.
$$

Performative regret measures the suboptimality of the chosen decisions relative to the performative optima.
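To make the regret definition concrete, here is a toy numerical sketch (the quadratic risk, the feasible set, and the decision sequence are all illustrative assumptions, not from the paper) that evaluates $\operatorname{Reg}(T)$ when $\mathrm{PR}$ is known in closed form:

```python
import numpy as np

# Toy illustration of Reg(T): performative risk PR(theta) = (theta - 1)^2
# with feasible set {theta : theta - 0.5 <= 0}, so theta_PO = 0.5.
# The decision sequence below is a made-up example trajectory.

def PR(theta):
    return (theta - 1.0) ** 2

theta_po = 0.5
thetas = np.linspace(0.0, 0.5, 100)          # hypothetical decisions theta_1..theta_T

regret = np.sum(PR(thetas) - PR(theta_po))   # Reg(T); no randomness in this toy case
print(regret)
```

In the actual problem $\mathrm{PR}$ is unknown, so such a computation is only available in hindsight or in simulation; the algorithm has to control the regret without ever evaluating it.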
Another performance metric for problem (1) for evaluating the decision sequence $\{\pmb{\theta}_t\}_{t=1}^T$ is the constraint violation, given by

$$
\operatorname {V i o} _ {i} (T) := \sum_ {t = 1} ^ {T} \mathbb {E} \left[ g _ {i} \left(\boldsymbol {\theta} _ {t}\right) \right], \forall i \in [ m ],
$$

where we use the symbol $[m]$ to represent the integer set $\{1, \dots, m\}$ throughout this paper.

Applications pertaining to problem (1) are ubiquitous. An example is given below.

Example 1 (Multi-Asset Portfolio). Consider a scenario where an investor wants to allocate his/her investment across a set of $l$ assets, such as stocks, bonds, and commodities. The objective is to maximize the expected return subject to certain constraints, including liquidity, diversity, and risk tolerance. Let $z_{i}$ denote the rate of return of the $i$ th asset and $\theta_{i}$ denote its weight of allocation, $\forall i \in [l]$ . The investment can affect the future rates of return of the assets and, consequently, the overall expected return of the portfolio. For example, excessive investment in a particular asset may lead to a decline in the rate of return of other assets. Let $\mathbf{z} = [z_{1},\dots ,z_{l}]^{\top}$ and $\pmb {\theta} = [\theta_1,\dots ,\theta_l]^{\top}$ . Then, the expected return of the portfolio is $\mathbb{E}[r_p] \coloneqq \mathbb{E}_{\mathbf{z}\sim \mathcal{D}(\pmb {\theta})}\mathbf{z}^{\top}\pmb{\theta}$ . Typically, the risk of the portfolio is measured by the variance of its returns, given by $\pmb{\theta}^{\top}\pmb{\Psi}\pmb{\theta}$ , where $\Psi$ is the covariance matrix of $\mathbf{z}$ . One common approach to model liquidity is using the bid-ask spread, which measures the gap between the highest price a buyer is willing to pay (the bid) and the lowest price a seller is willing to accept (the ask) for a particular asset. Denote the vector of the bid-ask spread of the $l$ assets by $\mathbf{s} = [s_1,\dots ,s_l]^{\top}$ .
Then, a liquidity constraint on the portfolio can be defined as $\mathbf{s}^{\top}\pmb{\theta} \leq S$ , where $S$ is the maximum allowable bid-ask spread. The multi-asset portfolio problem can be formulated as:

$$
\min _ {\boldsymbol {\theta}} - \mathbb {E} _ {\mathbf {z} \sim \mathcal {D} (\boldsymbol {\theta})} \mathbf {z} ^ {\top} \boldsymbol {\theta} \quad \text {s.t.} \quad \sum_ {i = 1} ^ {l} \theta_ {i} \leq 1, \quad \mathbf {0} \preceq \boldsymbol {\theta} \preceq \epsilon \cdot \mathbf {1}, \quad \mathbf {s} ^ {\top} \boldsymbol {\theta} \leq S, \quad \text {and} \quad \boldsymbol {\theta} ^ {\top} \boldsymbol {\Psi} \boldsymbol {\theta} \leq \rho ,
$$

where $\epsilon$ restricts the maximum amount of investment to one asset, and $\rho$ is the risk tolerance threshold.

In this paper, our goal is to design an online algorithm that achieves both sublinear regret and sublinear constraint violations with respect to the time horizon $T$ , i.e., $\mathrm{Reg}(T) \leq o(T)$ and $\mathrm{Vio}_i(T) \leq o(T)$ , for all $i \in [m]$ . Then, the time-average regret satisfies $\mathrm{Reg}(T) / T \leq o(1)$ , and the time-average constraint violations satisfy $\mathrm{Vio}_i(T) / T \leq o(1)$ , for all $i \in [m]$ . Both asymptotically go to zero as $T$ goes to infinity. Therefore, the performance of the decision sequence $\{\pmb{\theta}_t\}_{t=1}^T$ generated by the algorithm approaches that of the performative optimum $\pmb{\theta}_{\mathrm{PO}}$ as $T$ goes to infinity.

# 3 Adaptive Primal-Dual Algorithm

# 3.1 Robust Primal-Dual Framework

In this subsection, we develop a robust primal-dual framework for the performative prediction problem under inequality constraints. Our approach involves finding a saddle point for the regularized Lagrangian of problem (1).
The Lagrangian, denoted by $\mathcal{L}(\theta ,\lambda)$ , is defined as + +$$ +\mathcal {L} (\boldsymbol {\theta}, \boldsymbol {\lambda}) := \operatorname {P R} (\boldsymbol {\theta}) + \boldsymbol {\lambda} ^ {\top} \mathbf {g} (\boldsymbol {\theta}) - \frac {\delta \eta}{2} \| \boldsymbol {\lambda} \| _ {2} ^ {2}, \tag {2} +$$ + +where $\theta$ is the primal variable (decision), $\lambda$ is the dual variable (multiplier), $\eta > 0$ is the stepsize of the algorithm, and $\delta > 0$ is a control parameter. In (2), we add the regularizer $-\frac{\delta\eta}{2}\|\lambda\|_2^2$ to suppress the growth of the multiplier $\lambda$ , so as to improve the stability of the algorithm. + +To find the saddle point of the Lagrangian $\mathcal{L}(\theta, \lambda)$ , we utilize alternating gradient update on the primal variable $\theta$ and the dual variable $\lambda$ . The gradients of $\mathcal{L}(\theta, \lambda)$ with respect to $\theta$ and $\lambda$ are respectively given by + +$$ +\nabla_ {\boldsymbol {\theta}} \mathcal {L} (\boldsymbol {\theta}, \boldsymbol {\lambda}) = \nabla_ {\boldsymbol {\theta}} \operatorname {P R} (\boldsymbol {\theta}) + \nabla_ {\boldsymbol {\theta}} \mathbf {g} (\boldsymbol {\theta}) ^ {\top} \boldsymbol {\lambda}, \tag {3} +$$ + +$$ +\nabla_ {\boldsymbol {\lambda}} \mathcal {L} (\boldsymbol {\theta}, \boldsymbol {\lambda}) = \mathbf {g} (\boldsymbol {\theta}) - \delta \eta \boldsymbol {\lambda}, +$$ + +where $\nabla_{\theta}\mathbf{g}(\pmb {\theta})$ is the Jacobian matrix of $\mathbf{g}(\cdot)$ . 
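As a concrete (and purely illustrative) instantiation of (2)-(3), the following sketch evaluates both Lagrangian gradients for a toy quadratic risk and a linear constraint; the exact PR gradient is assumed known here, whereas the algorithm later has to replace it with an approximation, and all concrete numbers are our own choices:

```python
import numpy as np

# Gradients of the regularized Lagrangian (2) for a toy instance:
# PR(theta) = ||theta - c||^2 and g(theta) = G theta - b, so the
# Jacobian of g is simply G.  Illustrative numbers only.

c = np.array([1.0, 1.0])
G = np.array([[1.0, 1.0]])        # one linear constraint: theta_1 + theta_2 <= 1
b = np.array([1.0])
eta, delta = 0.1, 0.1

def grad_theta_L(theta, lam):
    grad_PR = 2.0 * (theta - c)               # known here; approximated in practice
    return grad_PR + G.T @ lam                # eq. (3)

def grad_lambda_L(theta, lam):
    return (G @ theta - b) - delta * eta * lam

theta, lam = np.zeros(2), np.ones(1)
print(grad_theta_L(theta, lam))   # grad wrt theta: [-1, -1]
print(grad_lambda_L(theta, lam))  # grad wrt lambda: [-1.01]
```

Note how the $-\delta\eta\lambda$ term from the regularizer shows up only in the dual gradient, gently pulling the multiplier back toward zero.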
In (3), $\nabla_{\theta}\mathrm{PR}(\pmb {\theta})$ is the gradient of the performative risk $\mathrm{PR}(\pmb {\theta})$ , given by

$$
\nabla_ {\boldsymbol {\theta}} \operatorname {P R} (\boldsymbol {\theta}) = \mathbb {E} _ {Z \sim \mathcal {D} (\boldsymbol {\theta})} \nabla_ {\boldsymbol {\theta}} \ell (\boldsymbol {\theta}; Z) + \mathbb {E} _ {Z \sim \mathcal {D} (\boldsymbol {\theta})} \ell (\boldsymbol {\theta}; Z) \nabla_ {\boldsymbol {\theta}} \log p _ {\boldsymbol {\theta}} (Z), \tag {4}
$$

where $p_{\pmb{\theta}}(Z)$ is the density of $\mathcal{D}(\pmb{\theta})$ .

Since the data distribution $\mathcal{D}(\pmb{\theta})$ is unknown, the exact gradient of the performative risk $\mathrm{PR}(\pmb{\theta})$ is unavailable, posing a significant challenge to the algorithm design. In this paper, we tackle this issue using a robust primal-dual framework. The main idea is to construct gradient approximations from data and then perform alternating gradient updates based on the inexact gradients. Denote by $\nabla_{\theta}\widehat{\mathrm{PR}}_t(\pmb {\theta})$ the approximation of the gradient $\nabla_{\theta}\mathrm{PR}(\pmb {\theta})$ at the $t$ th iteration. Correspondingly, an approximation for the Lagrangian gradient $\nabla_{\theta}\mathcal{L}(\pmb {\theta},\pmb {\lambda})$ at the $t$ th iteration is given by

$$
\nabla_ {\boldsymbol {\theta}} \widehat {\mathcal {L}} _ {t} (\boldsymbol {\theta}, \boldsymbol {\lambda}) := \nabla_ {\boldsymbol {\theta}} \widehat {\operatorname {P R}} _ {t} (\boldsymbol {\theta}) + \nabla_ {\boldsymbol {\theta}} \mathbf {g} (\boldsymbol {\theta}) ^ {\top} \boldsymbol {\lambda}, \forall t \in [ T ].
+$$

The robust alternating gradient update is then performed as

$$
\boldsymbol {\theta} _ {t + 1} = \Pi_ {\boldsymbol {\Theta}} \left(\boldsymbol {\theta} _ {t} - \eta \nabla_ {\boldsymbol {\theta}} \widehat {\mathcal {L}} _ {t} \left(\boldsymbol {\theta} _ {t}, \boldsymbol {\lambda} _ {t}\right)\right), \tag {5}
$$

$$
\boldsymbol {\lambda} _ {t + 1} = \left[ \boldsymbol {\lambda} _ {t} + \eta \nabla_ {\boldsymbol {\lambda}} \mathcal {L} \left(\boldsymbol {\theta} _ {t}, \boldsymbol {\lambda} _ {t}\right) \right] ^ {+}. \tag {6}
$$

Then, the next question is how to construct effective gradient approximations that achieve satisfactory performance.

By (4), the expectation over $\mathcal{D}(\theta)$ in the gradient $\nabla_{\theta}\mathrm{PR}(\theta)$ can be approximated by samples, while the unknown probability density $p_{\theta}(Z)$ presents the main challenge. Most existing research circumvented this problem by omitting the second term in $\nabla_{\theta}\mathrm{PR}(\theta)$ . This essentially gives a performative stable point. However, as pointed out in [Miller et al., 2021], performative stable points can be arbitrarily sub-optimal, leading to vacuous solutions. Instead, if we have further knowledge about the parametric structure of $p_{\theta}(Z)$ , the complexity of gradient approximation can be greatly reduced. In this regard, [Miller et al., 2021] and [Jagadeesan et al., 2022] exploited the linear structure of the location family, and [Izzo et al., 2021] considered distribution maps within the exponential family. Following [Miller et al., 2021; Jagadeesan et al., 2022], we focus on the family of location maps in this paper because it exhibits a favorable linear structure for algorithm development while maintaining broad generality to various applications. Next, we develop an adaptive algorithm for problem (1) with location-family distribution maps based on the above robust primal-dual framework.
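The alternating update (5)-(6) can be sketched on a toy strongly convex instance as follows (all numbers are illustrative; `np.clip` plays the role of the projection $\Pi_{\Theta}$ onto a box, `np.maximum(..., 0)` implements $[\cdot]^+$, and the exact PR gradient stands in for the inexact $\nabla_{\theta}\widehat{\mathrm{PR}}_t$):

```python
import numpy as np

# Alternating update (5)-(6) for PR(theta) = ||theta - c||^2 subject to
# g(theta) = theta_1 + theta_2 - 1 <= 0, with Theta = [-2, 2]^2.
# Toy instance; the exact PR gradient replaces the approximation here.

c = np.array([1.0, 1.0])
G, b = np.array([[1.0, 1.0]]), np.array([1.0])
eta, delta = 0.05, 0.1

theta, lam = np.zeros(2), np.zeros(1)
for _ in range(5000):
    g_val = G @ theta - b                                 # g(theta_t)
    grad_theta = 2.0 * (theta - c) + G.T @ lam            # approx. Lagrangian gradient
    theta = np.clip(theta - eta * grad_theta, -2.0, 2.0)            # eq. (5)
    lam = np.maximum(lam + eta * (g_val - delta * eta * lam), 0.0)  # eq. (6)

print(theta, lam)   # theta settles near the constrained optimum (0.5, 0.5)
```

Both updates at step $t$ are computed from the same pair $(\pmb{\theta}_t, \pmb{\lambda}_t)$, matching (5)-(6); the small $\delta\eta\pmb{\lambda}$ damping keeps the multiplier from drifting.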
+ +# 3.2 Algorithm Design for the Location Family

In the setting of the location family, the distribution map depends on $\theta$ via a linear shift, i.e.,

$$
Z \sim \mathcal {D} (\boldsymbol {\theta}) \Leftrightarrow Z \stackrel {d} {=} Z _ {0} + \mathbf {A} \boldsymbol {\theta}, \tag {7}
$$

where $Z_0 \sim \mathcal{D}_0$ is a base component representing the data without performativity, $\mathbf{A} \in \mathbb{R}^{k \times d}$ captures the performative effect of decisions, and $\stackrel {d} {=}$ denotes equality in distribution. Denote by $\pmb{\Sigma}$ the covariance matrix of the base distribution $\mathcal{D}_0$ . Note that $\mathcal{D}_0$ is still unknown. Plugging the distribution definition (7) into (4), we obtain a more explicit expression for $\nabla_{\theta} \mathrm{PR}(\pmb{\theta})$ as

$$
\nabla_ {\boldsymbol {\theta}} \operatorname {P R} (\boldsymbol {\theta}) = \mathbb {E} _ {Z _ {0} \sim \mathcal {D} _ {0}} \left[ \nabla_ {\boldsymbol {\theta}} \ell (\boldsymbol {\theta}; Z _ {0} + \mathbf {A} \boldsymbol {\theta}) + \mathbf {A} ^ {\top} \nabla_ {Z} \ell (\boldsymbol {\theta}; Z _ {0} + \mathbf {A} \boldsymbol {\theta}) \right].
$$

To compute $\nabla_{\theta}\mathrm{PR}(\pmb {\theta})$ , we still need to address two problems: the unknown base distribution $\mathcal{D}_0$ and the unknown performative parameter $\mathbf{A}$ . We tackle them as follows.

Offline Stochastic Approximation: We approximate the base distribution $\mathcal{D}_0$ offline by sample average approximation [Kleywegt et al., 2002]. Specifically, before the start of the alternating gradient update, we first draw $n$ samples $\{Z_{0,i}\}_{i = 1}^{n}$ from $\mathcal{D}(\mathbf{0})$ . These samples are used to approximate the expectation over $Z_{0}$ throughout the algorithm iterations. Hence, the sample complexity from this expectation approximation is fixed at $n$ .

Online Parameter Estimation: We estimate the parameter $\mathbf{A}$ via online least squares.
In each round of the alternating gradient update, we first take the current decision $\pmb{\theta}_t$ and its perturbed point $\pmb{\theta}_t + \mathbf{u}_t$ to observe samples $Z_{t} \sim \mathcal{D}(\pmb{\theta}_{t})$ and $Z_{t}^{\prime} \sim \mathcal{D}(\pmb{\theta}_{t} + \mathbf{u}_{t})$ , respectively, where $\mathbf{u}_t$ is an injected noise specified by the decision-maker. We have $\mathbb{E}[Z_t' - Z_t|\mathbf{u}_t] = \mathbf{A}\mathbf{u}_t$ . Then, the least-square problem at the $t$ th iteration is designed as

$$
\min _ {\mathbf {A}} \frac {1}{2} \left\| Z _ {t} ^ {\prime} - Z _ {t} - \mathbf {A u} _ {t} \right\| _ {2} ^ {2}.
$$

Let $\widehat{\mathbf{A}}_{t-1}$ be the estimate of $\mathbf{A}$ at the $(t-1)$ th iteration. Based on it, we construct a new estimate $\widehat{\mathbf{A}}_t$ for $\mathbf{A}$ by using gradient descent on the above least-square objective. This gives us the update

$$
\widehat {\mathbf {A}} _ {t} = \widehat {\mathbf {A}} _ {t - 1} + \zeta_ {t} \left(Z _ {t} ^ {\prime} - Z _ {t} - \widehat {\mathbf {A}} _ {t - 1} \mathbf {u} _ {t}\right) \mathbf {u} _ {t} ^ {\top},
$$

# Algorithm 1 Adaptive Primal-Dual Algorithm

1: Take decision $\pmb{\theta} = \mathbf{0}$ and observe $n$ samples $Z_{0,i}\sim \mathcal{D}_0,\forall i\in [n]$
2: Initialize $\theta_{1} \in \Theta$ arbitrarily. Set $\lambda_{1} = 0$ and $\widehat{\mathbf{A}}_{0} = \mathbf{0}$ .
3: for $t = 1$ to $T$ do
4: Take decision $\pmb{\theta}_t$ and observe $Z_{t}\sim \mathcal{D}(\pmb{\theta}_{t})$
5: Generate noise $\mathbf{u}_t$
6: Take decision $\pmb{\theta}_t + \mathbf{u}_t$ and observe $Z_t^\prime \sim \mathcal{D}(\pmb {\theta}_t + \mathbf{u}_t)$
7: Update parameter estimate by $\widehat{\mathbf{A}}_t = \widehat{\mathbf{A}}_{t - 1} + \zeta_t\left(Z_t' - Z_t - \widehat{\mathbf{A}}_{t - 1}\mathbf{u}_t\right)\mathbf{u}_t^\top$
8: Update gradient approximation $\nabla_{\pmb{\theta}}\widehat{\mathrm{PR}}_t(\pmb{\theta}_t)$ by (8).
+9: Compute $\nabla_{\pmb{\theta}}\widehat{\mathcal{L}}_t(\pmb {\theta}_t,\pmb {\lambda}_t) = \nabla_{\pmb{\theta}}\widehat{\mathrm{PR}}_t(\pmb {\theta}_t) + \nabla_{\pmb{\theta}}\mathbf{g}(\pmb {\theta}_t)^\top \pmb {\lambda}_t$
10: Update the primal variable by $\pmb{\theta}_{t + 1} = \Pi_{\Theta}\left(\pmb{\theta}_t - \eta \nabla_\theta \widehat{\mathcal{L}}_t(\pmb{\theta}_t,\pmb{\lambda}_t)\right)$
11: Compute $\nabla_{\pmb{\lambda}}\mathcal{L}(\pmb{\theta}_t,\pmb{\lambda}_t) = \mathbf{g}(\pmb{\theta}_t) - \delta \eta \pmb{\lambda}_t$
12: Update the dual variable by $\pmb{\lambda}_{t + 1} = [\pmb{\lambda}_t + \eta \nabla_{\pmb{\lambda}}\mathcal{L}(\pmb{\theta}_t,\pmb{\lambda}_t)]^+$ .
13: end for

where $\zeta_t$ is the stepsize of the online least squares at the $t$ th iteration.

Adaptive Primal-Dual Algorithm: With the above preparation, we obtain an approximation for the gradient $\nabla_{\theta}\mathrm{PR}(\pmb{\theta}_t)$ at the $t$ th iteration as

$$
\nabla_ {\boldsymbol {\theta}} \widehat {\operatorname {P R}} _ {t} (\boldsymbol {\theta} _ {t}) := \frac {1}{n} \sum_ {i = 1} ^ {n} \left[ \nabla_ {\boldsymbol {\theta}} \ell \left(\boldsymbol {\theta} _ {t}; Z _ {0, i} + \widehat {\mathbf {A}} _ {t} \boldsymbol {\theta} _ {t}\right) + \widehat {\mathbf {A}} _ {t} ^ {\top} \nabla_ {Z} \ell \left(\boldsymbol {\theta} _ {t}; Z _ {0, i} + \widehat {\mathbf {A}} _ {t} \boldsymbol {\theta} _ {t}\right) \right]. \tag {8}
$$

Given $\nabla_{\theta}\widehat{\mathrm{PR}}_t(\theta_t)$ , we develop an adaptive primal-dual algorithm for the constrained performative prediction problem (1) based on the robust primal-dual framework in § 3.1, which is presented in Algorithm 1. In Algorithm 1, the initial decision is randomly chosen from the admissible set $\Theta$ . Both the dual variable and the parameter estimate $\widehat{\mathbf{A}}_0$ are initialized to be zero. The algorithm maintains two sequences.
One is the estimate $\widehat{\mathbf{A}}_t$ , which is updated based on the newly observed samples $Z_t$ and $Z_t'$ , as given in Step 7. The other is the alternating gradient update on the primal and dual variables, which are respectively given in Step 10 and Step 12.

Remark 1. While this paper considers distribution maps within the location family, we emphasize that the proposed robust primal-dual framework is not restricted to any particular form of distribution. For instance, the exponential family considered in [Izzo et al., 2021] with their gradient approximation method can be directly applied to our robust primal-dual framework.

# 4 Convergence Analysis

In this section, we analyze the convergence performance of the proposed adaptive primal-dual algorithm. We first provide the convergence result of the robust primal-dual framework. Then, we bound the error of gradient approximation in our adaptive algorithm for the location family. With these results, the convergence bounds of the adaptive primal-dual algorithm are derived. Our analysis is based on the following assumptions.

Assumption 1 (Properties of $\ell(\theta; Z)$ ). The loss function $\ell(\theta; Z)$ is $\beta$ -smooth, $L_{\theta}$ -Lipschitz continuous in $\theta$ , $L_{Z}$ -Lipschitz continuous in $Z$ , $\gamma_{\theta}$ -strongly convex in $\theta$ , and $\gamma_{Z}$ -strongly convex in $Z$ . Moreover, we have $\gamma_{\theta} - \beta^2 / \gamma_Z > 0$ .

Assumption 2 (Compactness and Boundedness of $\Theta$ ). The set of admissible decisions $\Theta$ is closed, convex, and bounded, i.e., there exists a constant $R > 0$ such that $\| \pmb{\theta} \|_2 \leq R$ , $\forall \pmb{\theta} \in \Theta$ .

Assumption 3 (Properties of $\mathbf{g}(\pmb{\theta})$ ). The constraint function $\mathbf{g}(\pmb{\theta})$ is convex, $L_{\mathbf{g}}$ -Lipschitz continuous, and bounded, i.e., there exists a constant $C$ such that $\| \mathbf{g}(\pmb{\theta}) \|_2 \leq C, \forall \pmb{\theta} \in \Theta$ .
+ +Assumption 4 (Bounded Stochastic Gradient Variance). For any $i \in [n]$ and $\theta \in \Theta$ , there exists $\sigma \geq 0$ such that

$$
\mathbb {E} _ {Z _ {0, i} \sim \mathcal {D} _ {0}} \left\| \nabla_ {\boldsymbol {\theta}} \ell (\boldsymbol {\theta}; Z _ {0, i} + \mathbf {A} \boldsymbol {\theta}) + \mathbf {A} ^ {\top} \nabla_ {Z} \ell (\boldsymbol {\theta}; Z _ {0, i} + \mathbf {A} \boldsymbol {\theta}) - \nabla_ {\boldsymbol {\theta}} \operatorname {P R} (\boldsymbol {\theta}) \right\| _ {2} ^ {2} \leq \sigma^ {2}.
$$

Assumption 1 is standard in the literature of performative prediction. Assumptions 2 and 3 are widely used in the analysis of constrained optimization problems [Tan et al., 2018; Yan et al., 2019; Cao and Basar, 2020], even with perfect knowledge of objectives. Assumption 4 bounds the variance of the stochastic gradient of $\mathrm{PR}(\pmb{\theta})$ . Additionally, to ensure a sufficient exploration of the parameter space, we make the following assumption on the injected noises $\{\mathbf{u}_t\}_{t=1}^T$ .

Assumption 5 (Injected Noise). The injected noises $\{\mathbf{u}_t\}_{t=1}^T$ are independent and identically distributed. Moreover, there exist positive constants $\kappa_1, \kappa_2, \kappa_3$ such that for any $t \in [T]$ , the random noise $\mathbf{u}_t$ satisfies

$$
\mathbf {0} \prec \kappa_ {1} \cdot \mathbf {I} \preceq \mathbb {E} \left[ \mathbf {u} _ {t} \mathbf {u} _ {t} ^ {\top} \right], \quad \mathbb {E} \left\| \mathbf {u} _ {t} \right\| _ {2} ^ {2} \leq \kappa_ {2}, \quad \text {and} \quad \mathbb {E} \left[ \left\| \mathbf {u} _ {t} \right\| _ {2} ^ {2} \mathbf {u} _ {t} \mathbf {u} _ {t} ^ {\top} \right] \preceq \kappa_ {3} \mathbb {E} \left[ \mathbf {u} _ {t} \mathbf {u} _ {t} ^ {\top} \right].
+
$$

Consider Gaussian noise $\mathbf{u}_t \sim \mathcal{N}(\mathbf{0}, \mathbf{I})$ for all $t \in [T]$ ; then we have $\kappa_{1} = 1$ , $\kappa_{2} = d$ , and $\kappa_{3} = 3d$ . + +With the above assumptions, we provide some supporting lemmas below. First, we show the $\varepsilon$ -sensitivity of the location family given in (7). + +Lemma 1 (ε-Sensitivity of $\mathcal{D}(\pmb{\theta})$ ). Define $\sigma_{\max}(\mathbf{A}) \coloneqq \max_{\|\pmb{\theta}\|_2 = 1} \|\mathbf{A}\pmb{\theta}\|_2$ . The location family given in (7) is ε-sensitive with parameter $\varepsilon \leq \sigma_{\max}(\mathbf{A})$ . That is, for any $\pmb{\theta}, \pmb{\theta}' \in \Theta$ , we have $\mathcal{W}_1(\mathcal{D}(\pmb{\theta}), \mathcal{D}(\pmb{\theta}')) \leq \varepsilon \|\pmb{\theta} - \pmb{\theta}'\|_2$ , where $\mathcal{W}_1(\mathcal{D}, \mathcal{D}')$ denotes the Wasserstein-1 distance. + +See § A of the supplementary file for the proof. Building upon Lemma 1, we have the following Lemma 2 about the performative risk $\mathrm{PR}(\theta)$ . + +Lemma 2 (Lipschitz Continuity and Convexity of PR(θ)). Consider the location family given in (7). With Assumption 1 and Lemma 1, we have that: 1) the performative risk PR(θ) is L-Lipschitz continuous for $L \leq L_{\theta} + L_{Z}\sigma_{\max}(\mathbf{A})$ ; 2) the performative risk PR(θ) is $\gamma$ -strongly convex for + +$$
\gamma \geq \max \left\{\gamma_ {\pmb {\theta}} - \beta^ {2} / \gamma_ {Z}, \gamma_ {\pmb {\theta}} - 2 \varepsilon \beta + \gamma_ {Z} \sigma_ {\min} ^ {2} (\mathbf {A}) \right\},
$$ + +where $\sigma_{\min}(\mathbf{A})\coloneqq \min_{\| \pmb {\theta}\| _2 = 1}\| \mathbf{A}\pmb {\theta}\| _2$ . + +See § B of the supplementary file for the proof. Based on the Lipschitz continuity and convexity of $\mathrm{PR}(\pmb{\theta})$ , we provide the convergence result of the robust primal-dual framework below. + +Lemma 3 (Convergence Result of Robust Primal-Dual Framework). Set $\eta = \frac{1}{\sqrt{T}}$ .
Then, there exists a constant $\delta \in \left[\frac{1 - \sqrt{1 - 32\eta^2L_{\mathbf{g}}^2}}{4\eta^2},\frac{1 + \sqrt{1 - 32\eta^2L_{\mathbf{g}}^2}}{4\eta^2}\right]$ such that under Assumptions 1-3, for $T \geq 32L_{\mathbf{g}}^2$ , the regret satisfies: + +$$ +\begin{array}{l} \sum_ {t = 1} ^ {T} \left(\mathbb {E} [ \mathrm {P R} (\pmb {\theta} _ {t}) ] - \mathrm {P R} (\pmb {\theta} _ {\mathrm {P O}})\right) \leq \frac {\gamma \sqrt {T}}{\gamma - a} \left(2 R ^ {2} + C ^ {2} + 2 L ^ {2}\right) \\ + \frac {\gamma}{\gamma - a} \left(\frac {1}{2 a} + \frac {1}{\sqrt {T}}\right) \sum_ {t = 1} ^ {T} \mathbb {E} \left\| \nabla_ {\pmb {\theta}} \widehat {\mathrm {P R}} _ {t} (\pmb {\theta} _ {t}) - \nabla_ {\pmb {\theta}} \mathrm {P R} (\pmb {\theta} _ {t}) \right\| _ {2} ^ {2}, \\ \end{array} +$$ + +where $a \in (0, \gamma)$ is a constant. Further, for any $i \in [m]$ , the constraint violation satisfies: + +$$ +\begin{array}{l} \mathbb {E} \left[ \sum_ {t = 1} ^ {T} g _ {i} (\boldsymbol {\theta} _ {t}) \right] \leq \sqrt {1 + \delta} (2 R + \sqrt {2} C + 2 L) \sqrt {T} \\ + \sqrt {1 + \delta} \left(\frac {T ^ {\frac {1}{4}}}{\sqrt {a}} + \sqrt {2}\right) \left(\sum_ {t = 1} ^ {T} \mathbb {E} \left\| \nabla_ {\pmb {\theta}} \widehat {\mathrm {P R}} _ {t} (\pmb {\theta} _ {t}) - \nabla_ {\pmb {\theta}} \mathrm {P R} (\pmb {\theta} _ {t}) \right\| _ {2} ^ {2}\right) ^ {\frac {1}{2}}. \\ \end{array} +$$ + +Remark 2. Lemma 3 reveals the impact of gradient approximation error on the convergence performance of the robust primal-dual framework. By Lemma 3, if the accumulated gradient approximation error is less than $\mathcal{O}(\sqrt{T})$ , both the regret and the constraint violations are bounded by $\mathcal{O}(\sqrt{T})$ . Although stochastic primal-dual methods for constrained problems without performativity also use approximated (stochastic) gradients, they generally require unbiased gradient approximation [Tan et al., 2018; Yan et al., 2019; Cao and Basar, 2022]. 
This requirement, however, is difficult to satisfy in performative prediction since the unknown performative effect of decisions changes the data distribution. In contrast, the robust primal-dual framework does not restrict the approximate gradients to be unbiased and hence offers more flexibility to the design of gradient approximation. + +Proof of Lemma 3 is provided in § C of the supplementary file. In the next lemma, we bound the gradient approximation error of the adaptive primal-dual algorithm. + +Lemma 4 (Gradient Approximation Error). Set $\zeta_t = \frac{2}{\kappa_1(t - 1) + 2\kappa_3}$ , $\forall t \in [T]$ . Then, under Assumptions 4 and 5, the accumulated gradient approximation error is upper bounded by: + +$$ +\sum_ {t = 1} ^ {T} \mathbb {E} \left\| \nabla_ {\boldsymbol {\theta}} \widehat {\mathrm {P R}} _ {t} (\boldsymbol {\theta} _ {t}) - \nabla_ {\boldsymbol {\theta}} \mathrm {P R} (\boldsymbol {\theta} _ {t}) \right\| _ {2} ^ {2} \leq \frac {2 T \sigma^ {2}}{n} + \frac {4}{n} \left(2 L _ {Z} ^ {2} + \beta^ {2} R ^ {2} \left(1 + 2 \sigma_ {\max } (\mathbf {A})\right)\right) \overline {{\alpha}} \ln (T), +$$ + +where $\overline{\alpha} := \max \left\{\frac{2\kappa_3}{\kappa_1}\|\widehat{\mathbf{A}}_0 - \mathbf{A}\|_{\mathrm{F}}^2, \frac{8\kappa_2\operatorname{tr}(\boldsymbol{\Sigma})}{\kappa_1^2}\right\}$ . In Algorithm 1, we set $\widehat{\mathbf{A}}_0 = \mathbf{0}$ , and thus we have $\overline{\alpha} = \max \left\{\frac{2\kappa_3}{\kappa_1}\|\mathbf{A}\|_{\mathrm{F}}^2, \frac{8\kappa_2\operatorname{tr}(\boldsymbol{\Sigma})}{\kappa_1^2}\right\}$ . + +Remark 3. Lemma 4 demonstrates that the gradient approximation error of the adaptive primal-dual algorithm is upper bounded by $\mathcal{O}(T / n + \ln (T))$ . 
If we set the number of initial samples $n \geq \sqrt{T}$ , we have $\sum_{t = 1}^{T}\mathbb{E}\left\| \nabla_{\pmb{\theta}}\widehat{\mathrm{PR}}_t(\pmb{\theta}_t) - \nabla_{\pmb{\theta}}\mathrm{PR}(\pmb{\theta}_t)\right\| _2^2\leq \mathcal{O}(\sqrt{T})$ . According to Lemma 3, this suffices to make the regret and constraint violation bounds $\mathcal{O}(\sqrt{T})$ . + +Proof of Lemma 4 is presented in § D of the supplementary file. Combining Lemma 3 and Lemma 4 yields the regret and constraint violations of Algorithm 1, which are elaborated in Theorem 1 below. + +Theorem 1. Set $\eta = \frac{1}{\sqrt{T}}$ and $\zeta_t = \frac{2}{\kappa_1(t - 1) + 2\kappa_3}$ , $\forall t \in [T]$ . Then, there exists a constant $\delta \in \left[\frac{1 - \sqrt{1 - 32\eta^2L_{\mathbf{g}}^2}}{4\eta^2}, \frac{1 + \sqrt{1 - 32\eta^2L_{\mathbf{g}}^2}}{4\eta^2}\right]$ such that under Assumptions 1-5, for $T \geq 32L_{\mathbf{g}}^2$ , the regret of Algorithm 1 is upper bounded by: + +$$
\begin{array}{l} \sum_ {t = 1} ^ {T} \left(\mathbb {E} [ \mathrm {P R} (\pmb {\theta} _ {t}) ] - \mathrm {P R} (\pmb {\theta} _ {\mathrm {P O}})\right) \leq \frac {\gamma \sqrt {T}}{\gamma - a} \left(2 R ^ {2} + C ^ {2} + 2 L ^ {2}\right) + \frac {\gamma \sigma^ {2}}{\gamma - a} \left(\frac {1}{a} + \frac {2}{\sqrt {T}}\right) \frac {T}{n} \\ + \frac {\gamma \bar {\alpha} \ln (T)}{n (\gamma - a)} \left(\frac {2}{a} + \frac {4}{\sqrt {T}}\right) \left(2 L _ {Z} ^ {2} + \beta^ {2} R ^ {2} \left(1 + 2 \sigma_ {\max } (\mathbf {A})\right)\right). \\ \end{array}
$$ + +Further, for any $i \in [m]$ , the constraint violation is upper bounded by: + +$$
\begin{array}{l} \mathbb {E} \left[ \sum_ {t = 1} ^ {T} g _ {i} (\pmb {\theta} _ {t}) \right] \leq \sqrt {1 + \delta} \left[ (2 R + \sqrt {2} C + 2 L) \sqrt {T} + \left(\frac {\sqrt {2}}{\sqrt {a}} + \frac {2}{T ^ {\frac {1}{4}}}\right) \frac {\sigma T ^ {\frac {3}{4}}}{\sqrt {n}} \right] \\ + \frac {2 \sqrt {\overline {\alpha} (1 + \delta) \ln (T)}}{\sqrt {n}} \left(\frac {T ^ {\frac {1}{4}}}{\sqrt {a}} + \sqrt {2}\right) \left(2 L _ {Z} ^ {2} + \beta^ {2} R ^ {2} \left(1 + 2 \sigma_ {\max} (\mathbf {A})\right)\right) ^ {\frac {1}{2}}. \\ \end{array}
$$ + +Remark 4. Theorem 1 demonstrates that Algorithm 1 achieves $\mathcal{O}(\sqrt{T} + T / n)$ regret and $\mathcal{O}(\sqrt{T} + T^{\frac{3}{4}} / \sqrt{n})$ constraint violations. By setting $n = \sqrt{T}$ , we have $T / n = \sqrt{T}$ and $T^{\frac{3}{4}} / \sqrt{n} = \sqrt{T}$ , and hence both the regret and constraint violations are upper bounded by $\mathcal{O}(\sqrt{T})$ . This indicates that Algorithm 1 attains the same order of performance as the stochastic primal-dual algorithm without performativity [Tan et al., 2018; Yan et al., 2019]. + +Remark 5. Throughout the time horizon $T$ , Algorithm 1 requires a total of $\sqrt{T} + 2T$ samples. Among them, $\sqrt{T}$ samples are dedicated to approximating the expectation over the base component $Z_0$ . Furthermore, each iteration requires an additional 2 samples to construct the online least-squares objective, accounting for the remaining $2T$ samples. + +# 5 Numerical Experiments + +This section verifies the efficacy of our algorithm and theoretical results by conducting numerical experiments on two examples: multi-task linear regression and multi-asset portfolio. + +We first consider a multi-task linear regression problem in an undirected graph $\mathcal{G} \coloneqq (\mathcal{V}, \mathcal{E})$ , where $\mathcal{V}$ represents the node set and $\mathcal{E}$ represents the edge set. Each node $i$ handles a linear regression task $\mathrm{PR}_i(\pmb{\theta}_i) \coloneqq \mathbb{E}_{(\mathbf{x}_i, y_i) \sim \mathcal{D}_i(\pmb{\theta}_i)} \ell_i(\pmb{\theta}_i; (\mathbf{x}_i, y_i))$ , where $\pmb{\theta}_i$ is the parameter vector and $(\mathbf{x}_i, y_i)$ is a feature-label pair.
The loss function of each task is $\ell_i(\pmb{\theta}_i; (\mathbf{x}_i, y_i)) = \frac{1}{2} (y_i - \pmb{\theta}_i^\top \mathbf{x}_i)^2, \forall i \in \mathcal{V}$ . The parameters of each connected node pair are subject to a proximity constraint $\| \pmb{\theta}_i - \pmb{\theta}_j \|_2^2 \leq b_{ij}^2$ , $\forall (i, j) \in \mathcal{E}$ . The entire network aims to solve the following problem: + +$$
\min _ {\pmb {\theta} _ {i}, \forall i} \frac {1}{2} \sum_ {i \in \mathcal {V}} \mathbb {E} _ {(\mathbf {x} _ {i}, y _ {i}) \sim \mathcal {D} _ {i} (\pmb {\theta} _ {i})} (y _ {i} - \pmb {\theta} _ {i} ^ {\top} \mathbf {x} _ {i}) ^ {2} \quad \mathrm {s . t .} \quad \frac {1}{2} \| \pmb {\theta} _ {i} - \pmb {\theta} _ {j} \| _ {2} ^ {2} \leq b _ {i j} ^ {2}, \forall (i, j) \in \mathcal {E}.
$$ + +![](images/52b200f562840486bd83172e25f8fbf99c5a2839734b91e6fd3cd1462584d123.jpg) +![](images/718d5e742011a6a46072cf528444b298e40a888b3eb2c2f7681f739127ef7b9a.jpg) +![](images/b8e9df4637894c5d060a4d61a35bceb4436bf2916d5142efb6712fb01099188e.jpg) +![](images/cf670a4dfe753577d12a88786e62ca47cfa766b727165dc5929728459e2ac8a8.jpg) +Figure 1: Multi-task linear regression. (a) $\frac{\mathrm{Reg}(t)}{t\cdot\mathrm{Reg}(1)}$ ; (b) $\frac{\mathrm{Vio}_i(t)}{t\cdot|\mathrm{Vio}_i(1)|}$ ; (c) $\|\pmb{\theta}_t - \pmb{\theta}_{\mathrm{PO}}\|_2^2$ ; (d) $\|\widehat{\mathbf{A}}_t - \mathbf{A}\|_{\mathrm{F}}^2$ . + +![](images/764fc897691a593e98641d58412a804941b0ce4ca4fe3f60a724821622913bbc.jpg) +![](images/397abe00f04e8be926ebb90f291581a3c3df313a89ce19c1198d5491e7ef51c3.jpg) +![](images/653078a3139735dffee002c8dd57ed0587754b89080bcb1f60c9c5c39f83e3fa.jpg) +![](images/f32ada0f16f4d2a2075857560bf1769f011aea0bf866d1e5409fea5450b0c1f2.jpg) +Figure 2: Multi-asset portfolio. (a) $\frac{\mathrm{Reg}(t)}{t\cdot\mathrm{Reg}(1)}$ ; (b) $\frac{\mathrm{Vio}_i(t)}{t\cdot|\mathrm{Vio}_i(1)|}$ ; (c) $\|\pmb{\theta}_t - \pmb{\theta}_{\mathrm{PO}}\|_2^2$ ; (d) $\|\widehat{\mathbf{A}}_t - \mathbf{A}\|_{\mathrm{F}}^2$ . + +The second example considers the multi-asset portfolio described in Example 1. The simulation details are provided in § F of the supplementary file. + +We compare the proposed adaptive primal-dual algorithm (abbreviated as APDA) with two approaches. The first approach is "PD-PS", which stands for the primal-dual (PD) algorithm used to find the performative stable (PS) points. PD-PS is similar to APDA, but it uses only the first term in Eq. (8) as the approximate gradient. The second approach is "baseline", which runs the same procedures as APDA with perfect knowledge of $\mathbf{A}$ , i.e., the performative effect is known. We consider four performance metrics: (a) relative time-average regret $\frac{\mathrm{Reg}(t)}{t\cdot\mathrm{Reg}(1)}$ , (b) relative time-average constraint violation $\frac{\mathrm{Vio}_i(t)}{t\cdot|\mathrm{Vio}_i(1)|}$ , (c) decision deviation $\| \pmb{\theta}_t - \pmb{\theta}_{\mathrm{PO}}\|_2^2$ , and (d) parameter estimation error $\| \widehat{\mathbf{A}}_t - \mathbf{A}\|_{\mathrm{F}}^2$ . + +Fig. 1 and Fig. 2 show the numerical results of the multi-task linear regression and the multi-asset portfolio, respectively. In both figures, we consider two settings for the sensitivity parameter of $\mathcal{D}(\theta)$ , namely $\varepsilon = 1$ and $\varepsilon = 10$ . The results of the two figures are qualitatively analogous. First, we observe that APDA significantly outperforms PD-PS: both the relative time-average regret and the decision deviation of the former reach an accuracy of around $10^{-3}$ for the setting of $T = 10^{6}$ , while those of the latter perform worse for $\varepsilon = 1$ and converge to constants for $\varepsilon = 10$ .
The relative time-average constraint violation converges to zero or to negative values in all cases. This corroborates the sublinearity of the regret and the constraint violations of APDA, as shown in Theorem 1. More importantly, this result implies that the larger the sensitivity parameter $\varepsilon$ , the stronger the performative power and, consequently, the worse PD-PS performs. In contrast, by tracking the performative gradient, APDA adapts to the unknown performative effect and consistently performs well. Moreover, both subfigures (d) show that the parameter estimation error decreases sublinearly with iterations, validating the effectiveness of the online parameter estimation. Last but not least, the performance of APDA is close to that of the baseline, which demonstrates the effectiveness of our proposed APDA algorithm. + +# 6 Conclusions + +This paper has studied the performative prediction problem under inequality constraints, where the agnostic performative effect of decisions changes future data distributions. To find the performative optima for the problem, we have developed a robust primal-dual framework that admits inexact gradients up to an accuracy of $\mathcal{O}(\sqrt{T})$ , yet delivers the same $\mathcal{O}(\sqrt{T})$ regret and constraint violations as the stochastic primal-dual algorithm without performativity. Then, based on this framework, we have proposed an adaptive primal-dual algorithm for the location family with an effective gradient approximation method that meets the desired accuracy using only $\sqrt{T} + 2T$ samples. Numerical experiments have validated the effectiveness of our algorithm and theoretical results. + +# Acknowledgments and Disclosure of Funding + +The work was supported by the National Natural Science Foundation of China Grant 62203373. + +# References + +Hussein A Abdou and John Pointon. 2011. Credit scoring, statistical techniques and evaluation criteria: a review of the literature.
Intelligent systems in accounting, finance and management 18, 2-3 (2011), 59-88. +Gavin Brown, Shlomi Hod, and Iden Kalemaj. 2022. Performative prediction in a stateful world. In International Conference on Artificial Intelligence and Statistics. PMLR, 6045-6061. +Xuanyu Cao and Tamer Başar. 2020. Decentralized multi-agent stochastic optimization with pairwise constraints and quantized communications. IEEE Transactions on Signal Processing 68 (2020), 3296-3311. +Xuanyu Cao and Tamer Başar. 2022. Distributed constrained online convex optimization over multiple access fading channels. IEEE Transactions on Signal Processing 70 (2022), 3468-3483. +Fabrizio Detassis, Michele Lombardi, and Michela Milano. 2021. Teaching the old dog new tricks: Supervised learning with constraints. In Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 35. 3742-3749. +Jinshuo Dong, Aaron Roth, Zachary Schutzman, Bo Waggoner, and Zhiwei Steven Wu. 2018. Strategic classification from revealed preferences. In Proceedings of the 2018 ACM Conference on Economics and Computation. 55-70. +Dmitriy Drusvyatskiy and Lin Xiao. 2022. Stochastic optimization with decision-dependent distributions. Mathematics of Operations Research (2022). +Hans Föllmer and Alexander Schied. 2002. Convex measures of risk and trading constraints. Finance and Stochastics 6 (2002), 429-447. +Francisco García-Sánchez, Ricardo Colomo-Palacios, and Rafael Valencia-García. 2020. A social-semantic recommender system for advertisements. Information Processing & Management 57, 2 (2020), 102153. +Michael Grant and Stephen Boyd. 2014. CVX: Matlab software for disciplined convex programming, version 2.1. +Moritz Hardt, Nimrod Megiddo, Christos Papadimitriou, and Mary Wootters. 2016. Strategic classification. In Proceedings of the 2016 ACM Conference on Innovations in Theoretical Computer Science. 111-122. +Daniel P Heyman and Matthew J Sobel. 2004. Stochastic models in operations research: stochastic optimization.
Vol. 2. Courier Corporation. +Zachary Izzo, Lexing Ying, and James Zou. 2021. How to learn when data reacts to your model: performative gradient descent. In International Conference on Machine Learning. PMLR, 4641-4650. +Meena Jagadeesan, Tijana Zrnic, and Celestine Mendler-Dünner. 2022. Regret minimization with performative feedback. In International Conference on Machine Learning. PMLR, 9760-9785. +Belhal Karimi, Blazej Miasojedow, Eric Moulines, and Hoi-To Wai. 2019. Non-asymptotic analysis of biased stochastic approximation scheme. In Conference on Learning Theory. PMLR, 1944-1974. +Susie Khamis. 2020. Branding diversity: New advertising and cultural strategies. Routledge. + +Anton J Kleywegt, Alexander Shapiro, and Tito Homem-de Mello. 2002. The sample average approximation method for stochastic discrete optimization. SIAM Journal on Optimization 12, 2 (2002), 479-502. +Qiang Li and Hoi-To Wai. 2022. State dependent performative prediction with stochastic approximation. In International Conference on Artificial Intelligence and Statistics. PMLR, 3164-3186. +Qiang Li, Chung-Yiu Yau, and Hoi-To Wai. 2022. Multi-agent Performative Prediction with Greedy Deployment and Consensus Seeking Agents. In Advances in Neural Information Processing Systems. +Debmalya Mandal, Stelios Triantafyllou, and Goran Radanovic. 2023. Performative reinforcement learning. In International Conference on Machine Learning. PMLR, 23642-23680. +Celestine Mendler-Dünner, Juan Perdomo, Tijana Zrnic, and Moritz Hardt. 2020. Stochastic optimization for performative prediction. Advances in Neural Information Processing Systems 33 (2020), 4929-4939. +David Metz. 2021. Time constraints and travel behaviour. Transportation Planning and Technology 44, 1 (2021), 16-29. +John P Miller, Juan C Perdomo, and Tijana Zrnic. 2021. Outside the echo chamber: Optimizing the performative risk. In International Conference on Machine Learning. PMLR, 7710-7720.
+Usue Mori, Alexander Mendiburu, Maite Álvarez, and Jose A Lozano. 2015. A review of travel time estimation and forecasting for advanced traveller information systems. *Transportmetrica A: Transport Science* 11, 2 (2015), 119-157. +Adhyyan Narang, Evan Faulkner, Dmitriy Drusvyatskiy, Maryam Fazel, and Lillian Ratliff. 2022. Learning in Stochastic Monotone Games with Decision-Dependent Data. In International Conference on Artificial Intelligence and Statistics. PMLR, 5891-5912. +Juan Perdomo, Tijana Zrnic, Celestine Mendler-Dünner, and Moritz Hardt. 2020. Performative prediction. In International Conference on Machine Learning. PMLR, 7599-7609. +Georgios Piliouras and Fang-Yi Yu. 2022. Multi-agent performative prediction: From global stability and optimality to chaos. arXiv preprint arXiv:2201.10483 (2022). +Warren B Powell. 2019. A unified framework for stochastic optimization. European Journal of Operational Research 275, 3 (2019), 795-821. +Joaquin Quinonero-Candela, Masashi Sugiyama, Anton Schwaighofer, and Neil D Lawrence. 2008. Dataset shift in machine learning. Mit Press. +Mitas Ray, Lillian J Ratliff, Dmitriy Drusvyatskiy, and Maryam Fazel. 2022. Decision-dependent risk minimization in geometrically decaying dynamic environments. In Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 36. 8081-8088. +Luciano Serafini and Artur d'Avila Garcez. 2016. Logic tensor networks: Deep learning and logical reasoning from data and knowledge. arXiv preprint arXiv:1606.04422 (2016). +Conghui Tan, Tong Zhang, Shiqian Ma, and Ji Liu. 2018. Stochastic primal-dual method for empirical risk minimization with $\mathcal{O}(1)$ per-iteration complexity. Advances in Neural Information Processing Systems 31 (2018). +Matteo Turchetta, Felix Berkenkamp, and Andreas Krause. 2019. Safe exploration for interactive machine learning. Advances in Neural Information Processing Systems 32 (2019). +Killian Wood, Gianluca Bianchin, and Emiliano Dall'Anese. 2021. 
Online projected gradient descent for stochastic optimization with decision-dependent distributions. IEEE Control Systems Letters 6 (2021), 1646-1651. +Killian Wood and Emiliano Dall'Anese. 2022a. Online Saddle Point Tracking with Decision-Dependent Data. arXiv preprint arXiv:2212.02693 (2022). +Killian Wood and Emiliano Dall'Anese. 2022b. Stochastic saddle point problems with decision-dependent distributions. arXiv preprint arXiv:2201.02313 (2022). +Yu Wu, Wei Wu, Can Xu, and Zhoujun Li. 2018. Knowledge enhanced hybrid neural network for text matching. In Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 32. +Yan Yan, Yi Xu, Qihang Lin, Lijun Zhang, and Tianbao Yang. 2019. Stochastic Primal-Dual Algorithms with Faster Convergence than $\mathcal{O}(1 / \sqrt{T})$ for Problems without Bilinear Structure. arXiv preprint arXiv:1904.10112 (2019). \ No newline at end of file diff --git a/zeroregretperformativepredictionunderinequalityconstraints/images.zip b/zeroregretperformativepredictionunderinequalityconstraints/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..496f2ef058e383b921a928f78ec0bb5fa87b2e29 --- /dev/null +++ b/zeroregretperformativepredictionunderinequalityconstraints/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:212969895e967a20d546d743574a8ea02cfb8bf8c7a3f6114c2df5cd38ea8817 +size 322602 diff --git a/zeroregretperformativepredictionunderinequalityconstraints/layout.json b/zeroregretperformativepredictionunderinequalityconstraints/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..5b27bda0dfb786a03558f79ade3b2b52d42b396d --- /dev/null +++ b/zeroregretperformativepredictionunderinequalityconstraints/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4723f70b4e3aa59d65b799adcdee338da78c820e192ebe21a4c406f1125b64c2 +size 534714 diff --git 
a/zeroshotanomalydetectionviabatchnormalization/dd160975-1bf8-4c76-acf1-9c4d6937ec70_content_list.json b/zeroshotanomalydetectionviabatchnormalization/dd160975-1bf8-4c76-acf1-9c4d6937ec70_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..9c5f397db10587191e9449c4713492a1e88a963c --- /dev/null +++ b/zeroshotanomalydetectionviabatchnormalization/dd160975-1bf8-4c76-acf1-9c4d6937ec70_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e9390975c19c491c9a972573a569b83ed88aaf27daa28adc50ac6620689a890c +size 189321 diff --git a/zeroshotanomalydetectionviabatchnormalization/dd160975-1bf8-4c76-acf1-9c4d6937ec70_model.json b/zeroshotanomalydetectionviabatchnormalization/dd160975-1bf8-4c76-acf1-9c4d6937ec70_model.json new file mode 100644 index 0000000000000000000000000000000000000000..58f1545ad01ece2481f45cc499d7395afacd35ed --- /dev/null +++ b/zeroshotanomalydetectionviabatchnormalization/dd160975-1bf8-4c76-acf1-9c4d6937ec70_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d68f03c34291bc414072c8d3aa943f5af10e28bfe2c1986006ca367fc07e841b +size 223398 diff --git a/zeroshotanomalydetectionviabatchnormalization/dd160975-1bf8-4c76-acf1-9c4d6937ec70_origin.pdf b/zeroshotanomalydetectionviabatchnormalization/dd160975-1bf8-4c76-acf1-9c4d6937ec70_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..95e0d00bc57ac901574484579b5c786ac8d6b009 --- /dev/null +++ b/zeroshotanomalydetectionviabatchnormalization/dd160975-1bf8-4c76-acf1-9c4d6937ec70_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:36609e126dd641d4708ee9510c81c0c63b300d3c4a9406353932a8690fb92cf1 +size 1949043 diff --git a/zeroshotanomalydetectionviabatchnormalization/full.md b/zeroshotanomalydetectionviabatchnormalization/full.md new file mode 100644 index 0000000000000000000000000000000000000000..3346a03d885d6418e157f4a10a3a201de36886a7 --- /dev/null +++ 
b/zeroshotanomalydetectionviabatchnormalization/full.md @@ -0,0 +1,649 @@ +# Zero-Shot Anomaly Detection via Batch Normalization + +Aodong Li* + +UC Irvine + +Chen Qiu* + +Bosch Center for AI + +Marius Kloft + +TU Kaiserslautern + +Padhraic Smyth + +UC Irvine + +Maja Rudolph† + +Bosch Center for AI + +Stephan Mandt† + +UC Irvine + +# Abstract + +Anomaly detection (AD) plays a crucial role in many safety-critical application domains. The challenge of adapting an anomaly detector to drift in the normal data distribution, especially when no training data is available for the "new normal", has led to the development of zero-shot AD techniques. In this paper, we propose a simple yet effective method called Adaptive Centered Representations (ACR) for zero-shot batch-level AD. Our approach trains off-the-shelf deep anomaly detectors (such as deep SVDD) to adapt to a set of inter-related training data distributions in combination with batch normalization, enabling automatic zero-shot generalization for unseen AD tasks. This simple recipe, batch normalization plus meta-training, is a highly effective and versatile tool. Our theoretical results guarantee the zero-shot generalization for unseen AD tasks; our empirical results demonstrate the first zero-shot AD results for tabular data and outperform existing methods in zero-shot anomaly detection and segmentation on image data from specialized domains. Code is at https://github.com/aodongli/zero-shot-ad-via-batch-norm + +# 1 Introduction + +Anomaly detection (AD)—the task of identifying data instances deviating from the norm [65]—plays a significant role in numerous application domains, such as fake review identification, bot detection in social networks, tumor recognition, and industrial fault detection. AD is particularly crucial in safety-critical applications where failing to recognize anomalies, for example, in a chemical plant or a self-driving car, can risk lives. 
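The batch-normalization-plus-meta-training recipe summarized in the abstract rests on a simple mechanism that can be sketched in a few lines of NumPy (a toy illustration on synthetic data, not the paper's implementation): re-centering and re-scaling a mostly-normal batch with batch statistics pulls the normal samples toward the origin, so the rare anomaly ends up with the largest distance-based score.

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy batch: 63 "normal" samples from one distribution plus 1 anomaly.
normal = rng.normal(loc=5.0, scale=1.0, size=(63, 2))
anomaly = np.array([[13.0, 13.0]])  # far from the normal cluster
batch = np.vstack([normal, anomaly])

# Batch normalization: re-center and re-scale using *batch* statistics.
z = (batch - batch.mean(axis=0)) / (batch.std(axis=0) + 1e-5)

# Distance-to-origin anomaly score after normalization.
scores = (z ** 2).sum(axis=1)

print(scores.argmax())  # -> 63, the anomaly
```

Because the centering uses the batch's own statistics, the same computation works no matter where the normal cluster sits, which is the intuition behind zero-shot adaptation to shifted distributions.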
+ +Consider a medical setting where an anomaly detector encounters a batch of medical images from different patients. The medical images have been recorded with a new imaging technology different from the training data, or the patients are from a demographic the anomaly detector has not been trained on. Our goal is to develop an anomaly detector that can still process such data using batches, assigning low scores to normal images and high scores to anomalies (i.e., images that differ systematically) without retraining. To achieve this zero-shot adaptation, we exploit the fact that anomalies are rare. Given a new batch of test data, a zero-shot AD method [18, 36, 51, 71] has to detect which features are typical of the majority of normal samples and which features are atypical. + +We propose Adaptive Centered Representations (ACR), a lightweight zero-shot AD method that combines two simple ideas: batch normalization and meta-training. Assuming an overall majority of "normal" samples, a randomly-sampled batch will typically have more normal samples than anomalies. The effect of batch normalization is then to draw these normal samples closer to the center (in its recentering and scaling operation), while anomalies will end up further away from the center. Notably, this scaling and centering is robust to a distribution shift in the input, allowing a + +![](images/7b08dad27cac58db64b9652712bcdaab9be843945276a356f0332dfac82d3f1f.jpg) +Figure 1: a) Demonstrations of concrete examples of a meta-training set and a testing distribution. It is not necessary for the meta-training set to include the exact types of samples encountered during testing. For instance, when detecting lions within geese, the training data does not need to include lions or geese. b) Illustration of zero-shot batch-level AD with ACR using a one-class classifier [63]. The approach encounters three tasks $(P_{1:3}^{\pi}$ , Eq. 
(6)) during training (black arrows) and learns to map each task's majority of samples (i.e., the normal samples) to a shared learned center in embedding space. At test time (blue arrow), the learned model maps the normal (majority) samples to the same center, and the distance from the center serves as the AD score. + +![](images/204778775aae446aa4ceacc9fe4392a59f1c79a85ed397306ae7748b8cf6b354.jpg) + +self-supervised anomaly detector to generalize to distributions never encountered during training. We propose a meta-training scheme to unlock the power of batch normalization layers for zero-shot AD. During training, the anomaly detector will see many different anomaly detection tasks, mixed from different choices for normal and abnormal examples. Through this variability in the training tasks, the anomaly detector learns to rely as much as possible on the batch normalization operations in its architecture. + +Advantages of ACR include that it is theoretically grounded, simple, domain-independent, and compatible with various backbone models commonly used in deep AD [56, 63]. Contrary to recent approaches based on foundation models [36], which are applicable only to images, ACR can be employed on data from any domain, such as time series, tabular data, or graphs. + +We begin by presenting our assumptions and method in Sec. 2. Next, with the main idea in mind, we describe the related work in Sec. 3. We demonstrate the effectiveness of our method with experiments in Sec. 4. Finally, we conclude our work and state the limitations and societal impacts. + +Our contributions can be summarized as follows: + +- An effective new method. Our results show for the first time that training off-the-shelf deep anomaly detectors on a meta-training set, using batch normalization layers, gives automatic zero-shot generalization for AD, for which we derive a generalization bound on anomaly scores.
We provide the first empirical study of zero-shot AD on tabular data, where our adaptation approach retains high accuracy. +- Competitive results for images. Our results demonstrate not only a substantial improvement in zero-shot AD performance for non-natural images, including medical imaging but also establish a new state-of-the-art in anomaly segmentation on the MVtec AD benchmark [5]. + +# 2 Method + +We begin with the problem statement in Sec. 2.1 and then state the assumptions in Sec. 2.2. Finally we present our proposed solution in Sec. 2.3. The training procedure is outlined in Alg. 1 in Supp. C. + +# 2.1 Problem Statement and Method Overview + +We consider the problem of learning an anomaly detector that is required to immediately adapt (without any further training) when deployed in a new environment. The main idea is to use batch normalization as a mechanism for adaptive batch-level AD. For any batch of data containing mostly "normal" samples, each batch normalization shifts its inputs to the origin, thereby (1) enabling the discrimination between normal data and outliers/anomalies, and (2) bringing data from different distributions into a common frame of reference. (Notably, we propose applying batch norm in multiple layers for different anomaly scorers.) For the algorithm to generalize to unseen distributions, + +we train our model on multiple data sets of "normal" data simultaneously, making sure each training batch contains a majority of related data points (from the same distribution) at a time. + +Fig. 1a illustrates this idea, where all distributions are exemplified based on the example of homogeneous groups of animals (only dogs, only robins, etc.) The goal is to detect a lion among geese, where neither geese nor lions have been encountered before. Fig. 
1b illustrates the scheme based on the popular example of deep support vector data description (DSVDD) [63], where samples are mapped to a pre-specified point in an embedding space and scored based on their distance to this point. All training distributions are mapped to the same point, as enabled through batch normalization.

# 2.2 Notation and Assumptions

To formalize the notion of a meta-training set, we consider a distribution of interrelated data distributions (previously referred to as groups) as commonly studied in meta-learning and zero-shot learning [3, 21, 22, 32]. This inter-relatedness can be expressed by assuming that $K$ training distributions $P_{1},\ldots ,P_{K}$ and a test distribution $P_{*}$ are sampled from a meta-distribution $\mathcal{Q}$ :

$$
P_{1}, \dots, P_{K}, P_{*} \stackrel{\text{i.i.d.}}{\sim} \mathcal{Q}. \tag{1}
$$

We assume that the distributions in $\mathcal{Q}$ share some common structure, such that training a model on one distribution has the potential to aid in deploying the model on another distribution. For example, the data $\mathbf{x}$ could be radiology images from patients, and each $P_{j}$ or $P_{*}$ could be a distribution of images from a specific hospital. These distributions share similarities but differ systematically because of differences in radiology equipment, calibration, and patient demographics. Each of the distributions $P \in \mathcal{Q}$ defines a different anomaly detection task. For each task, we have to obtain an anomaly scoring function $S_{\theta}$ that assigns low scores to normal samples $\mathbf{x} \sim P$ and high scores to anomalies.

We now consider a batch $\mathcal{B} \subset \mathcal{D}$ of size $B$, taken from an underlying data set $\mathcal{D} \sim P$ of size $N$.
The batch can be characterized by indexing data points from $\mathcal{D}$:

$$
\mathcal{B} \equiv \left(i_{1}, \dots, i_{B}\right) \sim \operatorname{Unif}\left(\{1, \dots, N\}\right). \tag{2}
$$

We denote the anomaly scores on a batch level by defining a vector-valued anomaly score

$$
\mathbf{S}_{\theta}\left(\mathbf{x}_{\mathcal{B}}\right) = \left(S_{\theta}^{i_{1}}\left(\mathbf{x}_{\mathcal{B}}\right), \dots, S_{\theta}^{i_{B}}\left(\mathbf{x}_{\mathcal{B}}\right)\right), \tag{3}
$$

indicating the anomaly score for every datum in a batch. By thresholding the anomaly scores $S_{\theta}^{i}(\mathbf{x}_{\mathcal{B}})$, we obtain binary predictions of whether data point $\mathbf{x}_i$ is anomalous in the context of batch $\mathbf{x}_{\mathcal{B}}$.

By conditioning on a batch of samples, our approach obtains distributional information beyond a single sample. For example, an image of a cat may be normal in the context of a batch of cat images, but it may be anomalous in the context of a batch of otherwise dog images. This differs from current deep anomaly detection schemes, which evaluate anomaly scores without referring to a context.

Before presenting a learning scheme for combining batch-level information with established anomaly detection approaches, we discuss the assumptions that our approach makes. (The empirical or theoretical justifications, as well as possibilities for removing or mitigating the assumptions, can be found in Supp. A.)

A1 Availability of a meta-training set. As discussed above, we assume the availability of a set of interrelated distributions. The meta-set is used to learn a model that can adapt without re-training.

A2 Batch-level anomaly detection. As mentioned above, we assume we perform batch-level predictions at test time, allowing us to detect anomalies based on reference data in the batch.

A3 Majority of normal data.
We assume that normal data points form the majority in every i.i.d. sampled test batch.

Due to the absence of anomaly labels (or text descriptions) at test time, we cannot infer the correct anomaly labels without assumptions A2 and A3. Together, they imply that, given a batch of test examples, the majority of the samples in the batch are normal.

# 2.3 Adaptively Centered Representations

Batch Normalization as Adaptation Modules. An important component of our method is batch normalization, which shifts and re-scales any data batch $\mathbf{x}_{\mathcal{B}}$ to have sample mean zero and variance one. Batch normalization also provides a naive, parameter-free zero-shot batch-level anomaly detector:

$$
S_{\mathrm{naive}}^{i}\left(\mathbf{x}_{\mathcal{B}}\right) = \left\| \left(\mathbf{x}_{i} - \bar{\mu}_{\mathbf{x}_{\mathcal{B}}}\right) / \bar{\sigma}_{\mathbf{x}_{\mathcal{B}}} \right\|_{2}^{2}, \tag{4}
$$

where $\bar{\mu}$ and $\bar{\sigma}^2$ are the coordinate-wise sample mean and sample variance. $\bar{\mu}$ is dominated by the majority of the batch, which, by assumption A3, is the normal data. If the $\mathbf{x}_i$ lie in an informative feature space, anomalies will have a higher-than-usual distance to the mean, making the approach a simple, adaptive AD method, illustrated in Fig. 2 in Supp. D.

While the example provides a proof of concept, in practice the normal samples typically do not concentrate around their mean in the raw data space. Next, we integrate this idea into neural networks and develop an approach that learns adaptively centered representations for zero-shot AD.

Deep Models with Batch Normalization Layers as Scalable Zero-shot Anomaly Detectors. In deep neural networks, the adaptation ability is obtained for free with batch normalization layers [35].
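Before moving to deep models, the parameter-free detector of Eq. (4) can be sketched in a few lines of NumPy (a sketch; the function name is our own):

```python
import numpy as np

def naive_batch_score(x_batch):
    """Eq. (4): squared L2 norm of each batch-normalized sample.

    The batch statistics are dominated by the normal majority
    (assumption A3), so anomalies land far from the origin."""
    mu = x_batch.mean(axis=0)             # coordinate-wise sample mean
    sigma = x_batch.std(axis=0) + 1e-8    # coordinate-wise sample std
    z = (x_batch - mu) / sigma            # batch normalization
    return np.sum(z ** 2, axis=1)         # one anomaly score per sample
```

Because only the statistics of the current batch enter the score, the same function applies unchanged to any new distribution, which is exactly the zero-shot property the deep variant inherits.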
Batch normalization has become a standard component that facilitates optimization convergence when training neural networks. In common neural network architectures [27, 33, 60], batch normalization layers are used after each non-linear transformation layer, performing zero-shot adaptation with respect to the input batch. The entire neural network, stacking many non-linear transformation and normalization layers, thus has great potential for scalable zero-shot adaptation and for learning adaptive feature representations of complex data.

Training Objective. As discussed above, we can instantiate $S_{\theta}$ as a deep neural network with batch normalization layers and optimize the neural network weights $\theta$. We first state our objective function and then give the rationale behind it. Our approach is compatible with a wide range of deep anomaly detection objectives; we therefore consider a generic loss function $L[\mathbf{S}_{\theta}(\mathbf{x}_{\mathcal{B}})]$ that is a function of the anomaly score. For example, in many cases, the loss function to be minimized is the anomaly score itself (averaged over the batch).

The availability of a meta-data set (A1) gives rise to the following minimization problem:

$$
\theta^{*} = \underset{\theta}{\arg\min} \frac{1}{K} \sum_{j = 1}^{K} \mathbb{E}_{\mathbf{x}_{\mathcal{B}} \sim P_{j}} L\left[\mathbf{S}_{\theta}\left(\mathbf{x}_{\mathcal{B}}\right)\right]. \tag{5}
$$

Typical choices for $L[\mathbf{S}_{\theta}(\mathbf{x}_{\mathcal{B}})]$ include DSVDD [64] and neural transformation learning (NTL) [56]. Details and modifications of this objective will follow.

Why does it work? Batch normalization helps re-calibrate the data batches of different distributions into a similar form: normal data will center around the origin.
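To make Eq. (5) concrete, the following NumPy sketch evaluates the objective for a toy model of our own choosing (linear layer, batch norm, ReLU, then a linear head with a DSVDD-style squared-norm score); it is an illustration, not the paper's architecture:

```python
import numpy as np

def batchnorm(a, eps=1e-8):
    # normalizes with the statistics of the *current* batch --
    # the mechanism that lets the scorer adapt at test time
    return (a - a.mean(axis=0)) / (a.std(axis=0) + eps)

def score_batch(x_batch, W1, W2):
    """Toy DSVDD-style scorer: linear -> batch norm -> ReLU -> linear,
    scored by the squared distance of the embedding to the origin."""
    h = np.maximum(batchnorm(x_batch @ W1), 0.0)
    return np.sum((h @ W2) ** 2, axis=1)

def meta_objective(datasets, W1, W2, batch_size=32, seed=0):
    """Eq. (5): average the batch-level loss over the K training
    distributions; here the loss is simply the mean anomaly score."""
    rng = np.random.default_rng(seed)
    losses = []
    for data in datasets:                  # one array of normal data per P_j
        idx = rng.integers(0, len(data), batch_size)
        losses.append(score_batch(data[idx], W1, W2).mean())
    return float(np.mean(losses))
```

A real implementation would minimize this objective with stochastic gradients; the sketch only shows how the K tasks enter a single averaged loss.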
Such calibration happens from granular features (lower layers) to high-level features (higher layers), resulting in powerful feature learning and adaptation ability. We visualize the calibration in Fig. 3 in Supp. I.2. Therefore, optimizing Eq. (5) learns a (locally) optimal $\mathbf{S}_{\theta^*}$ that is adaptive to all $K$ training distributions. This learned adaptation ability is guaranteed to generalize to unseen related distributions $P_*$. See Sec. 2.4 below and Supp. B for more details.

Meta Outlier Exposure. While Eq. (5) is a viable objective, we can significantly improve over it while avoiding trivial solutions. The approach builds on treating samples from other distributions as anomalies during training. The idea is that these synthetic anomalies can be used to guide learning a tighter decision boundary around the normal data [30]. Drawing on the notation from Eq. (1), we thus simulate a mixture distribution by contaminating each $P_{j}$ with an admixed fraction $(1 - \pi) \ll 1$ of data from the other available training distributions. The resulting corrupted distribution $P_{j}^{\pi}$ is then

$$
P_{j}^{\pi} := \pi P_{j} + (1 - \pi) \bar{P}_{j}, \quad \bar{P}_{j} := \frac{1}{K - 1} \sum_{i \neq j} P_{i}. \tag{6}
$$

This notation also captures the case where the training distribution is free of anomalies $(\pi = 1)$.

Next, we discuss constructing an additional loss for the admixed anomalies, whose identity is known at training time. As discussed in [30, 58], many anomaly scores $\mathbf{S}_{\theta}(\mathbf{x}_{\mathcal{B}})$ allow for easily constructing a score $\mathbf{A}_{\theta}(\mathbf{x}_{\mathcal{B}})$ that behaves inversely: we expect $\mathbf{A}_{\theta}(\mathbf{x}_{\mathcal{B}})$ to be large when evaluated on normal samples, and small for anomalies. Importantly, both scores share the same parameters.
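A training batch from the contaminated distribution $P_j^{\pi}$ of Eq. (6), together with anomaly labels $y$, could be drawn as follows (a sketch with names of our own choosing; when the datasets have equal sizes, pooling the other tasks matches the uniform mixture $\bar{P}_j$):

```python
import numpy as np

def sample_moe_batch(datasets, j, batch_size=64, pi=0.8, seed=0):
    """Draw a batch from P_j^pi (Eq. 6): a pi-fraction of normal data
    from P_j plus a (1 - pi)-fraction of outliers pooled from the other
    training distributions. Returns the batch and the anomaly labels y
    (0 = normal, 1 = admixed outlier)."""
    rng = np.random.default_rng(seed)
    n_normal = int(round(pi * batch_size))
    normal = datasets[j][rng.integers(0, len(datasets[j]), n_normal)]
    pool = np.concatenate([d for k, d in enumerate(datasets) if k != j])
    outlier = pool[rng.integers(0, len(pool), batch_size - n_normal)]
    x = np.vstack([normal, outlier])
    y = np.concatenate([np.zeros(n_normal), np.ones(batch_size - n_normal)])
    perm = rng.permutation(batch_size)     # shuffle normals and outliers
    return x[perm], y[perm]
```

Setting `pi=1.0` recovers the uncontaminated case, mirroring the remark after Eq. (6).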
In the context of DSVDD, we define $\mathbf{S}_{\theta}(\mathbf{x}_{\mathcal{B}}) = 1 / \mathbf{A}_{\theta}(\mathbf{x}_{\mathcal{B}})$, but other definitions are possible for alternative losses [58, 63, 64]. Using the inverse score, we can construct a supervised AD loss on the meta-training set as follows.

We define a binary indicator variable $y^{i}$, the anomaly label, indicating whether data point $i$ is normal or anomalous in the context of distribution $P_{j}$ (i.e., $y^{i} = 0$ iff $\mathbf{x}_{\mathcal{B}}^{i}$ is drawn from $P_{j}$). A natural choice for the loss in Eq. (5) is therefore

$$
L\left[\mathbf{S}_{\theta}\left(\mathbf{x}_{\mathcal{B}}\right)\right] = \frac{1}{B} \sum_{i \in \mathcal{B}} \left\{\left(1 - y^{i}\right) \mathbf{S}_{\theta}^{i}\left(\mathbf{x}_{\mathcal{B}}\right) + y^{i} \mathbf{A}_{\theta}^{i}\left(\mathbf{x}_{\mathcal{B}}\right)\right\}. \tag{7}
$$

The loss function resembles the outlier exposure loss [30], but instead of using synthetically generated samples (typically only available for images), we use samples from the complement $\bar{P}_j$ at training time to synthesize outliers. The training pseudo-code is in Alg. 1 of Supp. C.

In addition to DSVDD, we also study backbone models such as binary classifiers and NTL [56]. For NTL, we adopt the $\mathbf{S}_{\theta}$ and $\mathbf{A}_{\theta}$ used by Qiu et al. [58]. For binary classifiers, we set $\mathbf{S}_{\theta}(\mathbf{x}) = -\log\left(1 - \sigma(f_{\theta}(\mathbf{x}))\right)$ and $\mathbf{A}_{\theta}(\mathbf{x}) = -\log \sigma(f_{\theta}(\mathbf{x}))$.

Batch-level Prediction. After training, we deploy the model in an unseen production environment to detect anomalies in a zero-shot adaptive fashion. As during training, the test distribution will be a mixture of new normal samples from $P_{*}$ and an admixture of anomalies from a distribution never encountered before.
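Under the DSVDD-style choice $\mathbf{A}_{\theta} = 1/\mathbf{S}_{\theta}$, the loss of Eq. (7) reduces to a few lines (a sketch; the `eps` guard is our addition for numerical safety):

```python
import numpy as np

def moe_loss(scores, y, eps=1e-6):
    """Eq. (7): push the score S down on normal samples (y = 0) and
    push the inverse score A = 1/S down on admixed anomalies (y = 1)."""
    inverse = 1.0 / (scores + eps)     # A_theta for DSVDD-style scores
    return float(np.mean((1.0 - y) * scores + y * inverse))
```

With correct labels, a well-separated scorer (low scores on normal data, high scores on anomalies) yields a small loss; flipping the labels should increase it sharply.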
For the method to work, we still assume that the majority of samples are normal (Assumption A3). Anomaly scores are assigned based on batches, as during training. For prediction, the anomaly scores are thresholded at a user-specified value.

The time complexity of prediction depends on the network complexity and is constant, $O(1)$, relative to the batch size, because the predictions can be trivially parallelized via modern deep learning libraries.

# 2.4 Theoretical Results

Having described our method, we now establish a theoretical basis for ACR by deriving a bounded generalization error on an unseen test distribution $P_{*}$. We define the generalization error in terms of training and testing losses, i.e., we are interested in whether the expected loss generalizes from the meta-training distributions $P_{1},\dots,P_{K}$ to an unseen distribution $P_{*}$.

To prepare the notation, we split $S_{\theta}(\mathbf{x})$ into two parts: a feature extractor $\mathbf{z} = f_{\theta}(\mathbf{x})$, which spans from the input layer to the last batch norm layer and performs batch normalization, and an anomaly score $S(\mathbf{z})$, which covers all the remaining layers. We use $P_{j}^{z}$ to denote the data distribution $P_{j}$ transformed by the feature extractor $f_{\theta}$. We assume that $P_{j}^{z}$ satisfies $\mathbb{E}_{\mathbf{z}\sim P_j^z}[\mathbf{z}] = 0$ and $\mathrm{Var}_{\mathbf{z}\sim P_j^z}[\mathbf{z}] = 1$ for $j = 1,\dots,K,*$ because $f_{\theta}$ ends with a batch norm layer.

Theorem 2.1. Assume the mini-batches are large enough such that, for batches from each given distribution $P_{j}$, the mini-batch means and variances are approximately constant across batches. Furthermore, assume the loss function $L[S(\mathbf{z})]$ is bounded by $C$ for any $\mathbf{z}$. Let $\| \cdot \|_{TV}$ denote the total variation.
Then, the generalization error is upper bounded by + +$$ +\left| \mathbb {E} _ {\mathbf {x} _ {\mathcal {B}} \sim P _ {*}} \left[ \frac {1}{B} \sum_ {i = 1} ^ {B} L [ S _ {\theta} ^ {i} (\mathbf {x} _ {\mathcal {B}}) ] \right] - \frac {1}{K} \sum_ {j = 1} ^ {K} \mathbb {E} _ {\mathbf {x} _ {\mathcal {B}} \sim P _ {j}} \left[ \frac {1}{B} \sum_ {i = 1} ^ {B} L [ S _ {\theta} ^ {i} (\mathbf {x} _ {\mathcal {B}}) ] \right] \right| \leq C \left\| P _ {*} ^ {z} - \frac {1}{K} \sum_ {j = 1} ^ {K} P _ {j} ^ {z} \right\| _ {T V}. +$$ + +The proof is shown in Supp. B. Note that Thm. 2.1 still holds if $P_{j}$ or $P_{*}$ are contaminated distributions $P_{j}^{\pi}$ or $P_{*}^{\bar{\pi}}$ . + +Remark. Thm. 2.1 suggests that the generalization error of the expected loss function is bounded by the total variation distance between $P_{*}^{z}$ and $\frac{1}{K}\sum_{j = 1}^{K}P_{j}^{z}$ . While we leave a formal bound of the TV distance to future studies, the following intuition holds: since $f_{\theta}$ contains batch norm layers, the empirical distributions $\frac{1}{K}\sum_{j = 1}^{K}P_{j}^{z}$ and $P_{*}^{z}$ will share the same (zero) mean and (unit) variance. If both distributions are dominated by their first two moments, we can expect the total variation distance to be small, providing an explanation for the approach's favorable generalization performance. + +# 3 Related Work + +Deep AD. Many recent advances in AD are built on deep learning methods [65] and early strategies used autoencoder [9, 55, 85] or density-based [13, 66] models. Another pioneering stream of research combined one-class classification [70] with deep learning [57, 63]. Many other approaches to deep AD are self-supervised, employing a self-supervised loss function to train the detector and score anomalies [4, 23, 31, 44, 56, 68, 72, 75]. + +All of these approaches assume that the data distribution will not change too much at test time. 
However, in many practical scenarios, there will be significant shifts in the abnormal distribution and even the normal distribution. For example, Dragoi et al. [17] observed that existing AD methods fail to detect anomalies when distribution shifts occur in network intrusion detection. Another line of work in this context requires test-time modeling of the entire test set, e.g., COPOD [47], ECOD [48], and robust autoencoders [85], preventing real-time deployment.

Few-shot AD. Several recent works have studied adapting an anomaly detector to shifts by fine-tuning on a few test samples. One stream of research applies model-agnostic meta-learning (MAML) [21] to various deep AD models, including one-class classification [22], generative adversarial networks [52], autoencoders [78], graph deviation networks [16], and supervised classifiers [20, 84]. Some approaches extend prototypical networks to few-shot AD [8, 40]. Kozerawski and Turk [39] learn a linear SVM with a few samples on top of a frozen pre-trained feature extractor, while Sheynin et al. [73] learn a hierarchical generative model from a few normal samples for image AD. Wang et al. [77] learn an energy model for AD, where anomalies are scored by the error of reconstructing their embeddings from a set of normal features that are adapted with a few test samples. Huang et al. [32] learn a category-agnostic model with multiple training categories (a meta set). At test time, a few normal samples from a novel category are used to establish an anomaly detector in the feature space. Huang et al. [32] do not exploit the presence of a meta-set to learn a stronger anomaly detector through synthetic outlier exposure.
While meta-training for object-level anomaly detection (e.g., [22]) is generally simpler (it is easy to find anomaly examples, i.e., other objects different from the normal one), meta-training for anomaly segmentation (e.g., [32]) poses a harder task since image defects may differ from object to object (e.g., defects in transistors may not easily generalize to subtle defects in wood textures). Our experiments found that using images from different distributions as example anomalies during training is helpful for anomaly segmentation on MVTec-AD (see Supp. I.5). + +In contrast to all of the existing few-shot AD methods, we propose a zero-shot AD method and demonstrate that the learned AD model can adapt itself to new tasks without any support samples. + +Zero-shot AD. Foundation models pre-trained on massive training samples have achieved remarkable results on zero-shot tasks on images [37, 59, 82, 83]. For example, contrastive language-image pre-training (CLIP) [59] is a pre-trained language-vision model learned by aligning images and their paired text descriptions. One can achieve zero-shot image classification with CLIP by searching for the best-aligned text description of the test images. Esmaeilpour et al. [18] extend CLIP with a learnable text description generator for out-of-distribution detection. Liznerski et al. [51] apply CLIP for zero-shot AD and score the anomalies by comparing the alignment of test images with the correct text description of normal samples. In terms of anomaly segmentation, Trans-MM [7] is an interpretation method for Transformer-based architectures. Trans-MM uses the attention map to generate pixel-level masks of input images, which can be applied to CLIP. MaskCLIP [86] directly exploits CLIP's Transformer layer potential in semantic segmentation to generate pixel-level predictions given class descriptions. MAEDAY [71] uses the reconstruction error of a pre-trained masked autoencoder [28] to generate anomaly segmentation masks. 
WinCLIP [36], again using CLIP, slides a window over an image and inspects each patch to detect local defects defined by text descriptions.

However, foundation models have two constraints that do not apply to ACR. First, foundation models are not available for all data types. They do not exist, for example, for tabular data, which occurs widely in practice in applications such as network security and industrial fault detection. Also, existing adaptations of foundation models for AD (e.g., CLIP) may generalize poorly to specific domains that have not been covered in their massive training samples. For example, Liznerski et al. [51] observed that CLIP performs poorly on non-natural images, such as MNIST digits. In contrast, ACR does not rely on a powerful pre-trained foundation model, enabling zero-shot AD on various data types. Second, human involvement is required for foundation models. While previous pre-trained CLIP-based zero-shot AD methods adapt to new tasks through informative prompts given by human experts, our method enriches the zero-shot AD toolbox with a new adaptation strategy that requires no human involvement. Our approach allows the anomaly detector to infer the new task/distribution based on a mini-batch of samples.

Connections to Other Areas. Our problem setup and assumptions share similarities with other research areas, but the differences are also pronounced. Those areas include test-time adaptation [10, 49, 53, 67, 76], unsupervised domain adaptation [38], zero-shot classification [79], meta-learning [21], and contextual AD [25]. Supp. H details the connections, similarities, and differences.

# 4 Experiments

We evaluate the proposed method ACR on both image (detection/segmentation) and tabular data, where distribution shifts occur at test time. We compare ACR with established baselines based on deep AD, zero-shot AD, and few-shot AD methods.
The experiments show that our method is suitable for different data types, applicable to diverse AD models, robust to various anomaly ratios, and significantly outperforms existing baselines. We report results on image and tabular data in Sec. 4.1 and Sec. 4.2, and ablation studies in Sec. 4.3. Results on more datasets are in Supps. I.3 to I.6.

# 4.1 Experiments on Images

Visual AD consists of two major tasks: (image-level) anomaly detection and (pixel-level) anomaly segmentation. The former aims to accurately detect images of abnormal objects, e.g., detecting non-dog images; the latter focuses on detecting pixel-level local defects in an image, e.g., marking board wormholes. We test our method on both tasks and compare it to existing SOTA methods.

# 4.1.1 Anomaly Detection

We evaluate ACR on images when applied to two simple backbone models: DSVDD [63] and a binary classifier. Our method is trained from scratch. The evaluation demonstrates that ACR achieves superior AD results on corrupted natural images, medical images, and other non-natural images.

Datasets. We study four image datasets: CIFAR100-C [29] and OrganA [81] (plus MNIST [42] and Omniglot [41] in Supp. I.4). CIFAR100-C is the noise-corrupted version of CIFAR100's test data and is thus considered distributionally shifted data. We train using all training images from the original CIFAR100 and test all models on CIFAR100-C. OrganA is a medical image dataset with 11 classes (for various body organs). We leave two successive classes out for testing and use the other classes for training. We repeat the evaluation on all combinations of two consecutive classes. Across all experiments, we apply the "one-vs-rest" setting at test time, i.e., one class is treated as normal, and all the other classes are abnormal [65]. We report the results averaged over all combinations.

Baselines.
We compare our proposed method with a SOTA stationary deep anomaly detector (anomaly detection with an inductive bias (ADIB) [14]), a pre-trained classifier used for batch-level zero-shot AD (ResNet152 [27]), a SOTA zero-shot AD baseline (CLIP-AD [51]), and a few-shot AD baseline (one-class model-agnostic meta-learning (OC-MAML) [22]). ResNet152-I and ResNet152-II differ in which statistics they use for batch normalization: ResNet152-I uses the statistics from training, and ResNet152-II uses the input batch's statistics. See Supp. E for more details.

Implementation Details. We set $\pi = 0.8$ in Eq. (6) to apply Meta Outlier Exposure. For each approach, we train a single model and test it on different anomaly ratios. Two backbone models are implemented: DSVDD [63] (ACR-DSVDD) and a binary classifier with cross-entropy loss (ACR-BCE). More details are given in Supp. F.

Results. We report the results in terms of the AUROC averaged over five independent test runs, with standard deviation. We apply the model to tasks with different anomaly ratios to study the robustness of ACR to the anomaly ratio at test time. Our method ACR significantly outperforms all baselines on Gaussian-noise-corrupted CIFAR100-C and OrganA in Tab. 1. In Tabs. 8 and 9 in Supp. I, we systematically evaluate all methods on all 19 corrupted versions of CIFAR100 and on non-natural images (MNIST, Omniglot). The results show that on non-natural images (OrganA, MNIST, Omniglot) ACR performs best among all compared methods, including the large pre-trained CLIP-AD baseline; on corrupted natural images (CIFAR100-C), ACR achieves results competitive

Table 1: AUC (%) with standard deviation for anomaly detection on CIFAR100-C with Gaussian noise [29] and the medical image dataset OrganA. ACR performs best with both backbone models.
| Method | CIFAR100-C 1% | CIFAR100-C 5% | CIFAR100-C 10% | CIFAR100-C 20% | OrganA 1% | OrganA 5% | OrganA 10% |
| --- | --- | --- | --- | --- | --- | --- | --- |
| ADIB [14] | 50.9±2.4 | 50.5±0.9 | 50.6±0.9 | 50.2±0.5 | 49.9±6.3 | 50.3±2.4 | 50.2±1.3 |
| ResNet152-I [27] | 75.6±2.3 | 73.2±1.3 | 73.2±0.8 | 69.9±0.6 | 54.2±1.1 | 53.9±0.5 | 53.2±0.6 |
| ResNet152-II [27] | 62.5±3.1 | 61.8±1.7 | 61.2±0.6 | 60.2±0.4 | 54.2±1.7 | 53.5±0.8 | 52.9±0.3 |
| OC-MAML [22] | 53.0±3.6 | 54.1±1.9 | 55.8±0.6 | 57.1±1.0 | 73.7±4.7 | 72.2±2.6 | 74.2±2.4 |
| CLIP-AD [51] | 82.3±1.1 | 82.6±0.9 | 82.3±0.9 | 82.6±0.1 | 52.6±0.8 | 51.9±0.6 | 51.5±0.2 |
| ACR-DSVDD (ours) | 87.7±1.4 | 86.3±0.9 | 85.9±0.4 | 85.6±0.4 | 79.0±1.0 | 77.7±0.4 | 76.3±0.3 |
| ACR-BCE (ours) | 84.3±2.2 | 86.0±0.3 | 86.0±0.2 | 85.7±0.4 | 81.1±0.8 | 79.5±0.4 | 78.3±0.3 |
+ +Table 2: Pixel-level and image-level AUC (%) on MVtec AD. On average, our method outperforms the strongest baseline WinCLIP by $7.4\%$ AUC in pixel-level anomaly segmentation. + +
| | MAEDAY [71] | CLIP [59] | Trans-MM [7] | MaskCLIP [86] | WinCLIP [36] | ACR (ours) |
| --- | --- | --- | --- | --- | --- | --- |
| pixel-level | 69.4 | - | 57.5±0.0 | 63.7±0.0 | 85.1±0.0 | 92.5±0.2 |
| image-level | 74.5 | 74.0±0.0 | - | - | 91.8±0.0 | 85.8±0.6 |
with CLIP-AD and significantly outperforms the other baselines. ACR is also robust to various anomaly ratios: without any (hyper)parameter tuning, the results are consistent and vary by no more than $3\%$. The deep AD baseline, ADIB, has no adaptation ability and thus fails on the test tasks, performing no better than random guessing. Pre-trained ResNet152, armed with batch normalization layers, can adapt, but only to a limited extent, in contrast with our method, which directly learns to adapt. Few-shot OC-MAML suffers because it requires a large support set at test time to adapt effectively. CLIP-AD performs strongly on corrupted natural images but struggles with non-natural images, presumably because it is trained on massive amounts of natural images from the internet.

# 4.1.2 Anomaly Segmentation

We benchmark our method ACR on the MVTec AD dataset [5] in a zero-shot setup. Experiments show that ACR achieves new state-of-the-art anomaly segmentation performance.

Datasets. MVTec AD comprises 15 classes of images for industrial inspection. The goal is to detect local defects accurately. To implement our method for zero-shot anomaly segmentation tasks, we train on the training sets of all classes except the target one and test on the test set of the target class. For example, when segmenting wormholes on wood boards, we train a model on the training data of the other 14 classes and later test on the wood test set. This satisfies the zero-shot definition, as the model doesn't see any wood data during training. We apply this procedure for all classes.

Baselines. We compare our method to four zero-shot anomaly segmentation baselines: Trans-MM [7], MaskCLIP [86], MAEDAY [71], and WinCLIP [36]. The details are described in Sec. 3. We report their results as listed in Jeong et al. [36] and Schwartz et al. [71].

Implementation Details. We first extract informative texture features using a sliding window, which corresponds to 2D convolutions.
The convolution kernels are instantiated with those of a pre-trained ResNet. We follow the same data pre-processing steps as Cohen and Hoshen [12], Defard et al. [15], and Rippel et al. [62] to extract the features (the third layer's output in our case) of a WideResNet-50-2 pre-trained on ImageNet. Second, we detect anomalies in the extracted features at each window position with our ACR method. Specifically, each window position corresponds to one image patch. We stack into a batch the patches taken from a set of images that all share the same spatial position. For example, we may stack the top-left patch of all testing wood images into a batch and use ACR to detect anomalies in that batch. Finally, the window-wise anomaly scores are bilinearly interpolated to the original image size to get the pixel-level anomaly scores. In implementing meta outlier exposure, we tried two sources of outliers: one is noise-corrupted images, and the other is images of other classes. We report results of the former in the main paper and of the latter in Supp. I.5. More implementation details are given in Supp. F.

Results. Similar to common practice, we report both the pixel-level and image-level results in Tab. 2. We use the largest pixel-level anomaly score as the image-level score. All methods are evaluated with

Table 3: AUC (%) with standard deviation for anomaly detection on AnoShift with different anomaly contamination ratios (1% - 20%) and with the different splitting strategies AVG and FAR [17]. ACR with either backbone model outperforms all baselines. In particular, under the distribution shift occurring in the FAR split, ACR is the only method that is significantly better than random guessing.
| Method | 1% FAR | 1% AVG | 5% FAR | 5% AVG | 10% FAR | 10% AVG | 20% FAR | 20% AVG |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| OC-SVM [69] | 49.6±0.2 | 62.6±0.1 | 49.6±0.2 | 62.6±0.1 | 49.5±0.1 | 62.7±0.1 | 49.5±0.1 | 62.6±0.1 |
| IForest [50] | 25.8±0.4 | 54.6±0.2 | 26.1±0.1 | 54.7±0.1 | 26.0±0.1 | 54.6±0.1 | 26.0±0.1 | 54.7±0.1 |
| LOF [6] | 37.3±0.5 | 59.6±0.3 | 37.0±0.1 | 59.5±0.1 | 37.0±0.1 | 59.5±0.1 | 37.1±0.1 | 59.5±0.1 |
| KNN [61] | 45.0±0.3 | 70.8±0.1 | 45.3±0.2 | 70.9±0.1 | 45.1±0.1 | 70.8±0.1 | 45.2±0.1 | 70.8±0.1 |
| DSVDD [63] | 34.6±0.3 | 62.3±0.2 | 34.7±0.1 | 62.5±0.1 | 34.7±0.2 | 62.5±0.1 | 34.7±0.1 | 62.5±0.1 |
| AE [1] | 18.6±0.2 | 25.3±0.1 | 18.7±0.2 | 25.5±0.1 | 18.7±0.1 | 25.5±0.1 | 18.7±0.1 | 25.5±0.1 |
| LUNAR [24] | 24.5±0.4 | 38.3±0.4 | 24.6±0.1 | 38.6±0.2 | 24.7±0.1 | 38.7±0.1 | 24.6±0.1 | 38.6±0.1 |
| ICL [72] | 20.6±0.3 | 50.5±0.2 | 20.7±0.2 | 50.4±0.1 | 20.7±0.1 | 50.4±0.1 | 20.8±0.1 | 50.4±0.1 |
| NTL [56] | 40.7±0.3 | 57.0±0.1 | 40.9±0.2 | 57.1±0.1 | 41.0±0.1 | 57.1±0.1 | 41.0±0.1 | 57.1±0.1 |
| BERT-AD [17] | 28.6±0.3 | 64.6±0.2 | 28.7±0.1 | 64.6±0.1 | 28.7±0.1 | 64.6±0.1 | 28.7±0.1 | 64.7±0.1 |
| ACR-DSVDD (ours) | 62.0±0.5 | 74.0±0.2 | 61.3±0.1 | 73.3±0.1 | 60.4±0.1 | 72.5±0.1 | 59.1±0.1 | 71.2±0.1 |
| ACR-NTL (ours) | 62.5±0.2 | 73.4±0.1 | 62.2±0.1 | 73.2±0.1 | 62.3±0.1 | 73.1±0.1 | 62.0±0.1 | 72.7±0.1 |
the AUROC metric. The table shows that 1) our method is competitive with the SOTA method on image-level detection tasks, and 2) it surpasses the best baseline, WinCLIP, by a large margin (7.4% AUC on average) on anomaly segmentation tasks, achieving a new SOTA performance and attesting to the potential of our method. We report class-wise results in Supp. I.5.

# 4.2 Experiments on Tabular Data

Tabular data is an important data format in many real-world AD applications, e.g., network intrusion detection and malware detection. Distribution shifts in such data occur naturally (e.g., as new malware emerges) and grow over time. Existing zero-shot AD approaches [36, 51] are not applicable to tabular data. We evaluate ACR on tabular AD when applied to DSVDD and NTL. ACR achieves a new SOTA of zero-shot AD performance on tabular data with temporal distribution shifts.

Datasets. We evaluate all methods on two real-world tabular AD datasets, AnoShift [17] and Malware [34], where the data shift over time. AnoShift is a data traffic dataset for network intrusion detection collected over ten years (2006-2015). We follow the preprocessing procedure and train/test split suggested in Dragoi et al. [17]. We train the model on normal data collected from 2006 to 2010, and test on a mixture of normal and abnormal samples (with anomaly ratios varying from $1\%$ to $20\%$) collected from 2011 to 2015. We also apply similar protocols on Malware [34], a dataset for detecting malicious computer programs, and provide details in Supp. I.6.

Baselines. We compare with state-of-the-art deep and shallow detectors for tabular AD [2, 17, 26] and study their performance under test distribution shifts. The shallow AD baselines include OC-SVM [69], IForest [50], LOF [6], and KNN [61]. The deep AD baselines include DSVDD [63], Autoencoder (AE) [1], LUNAR [24], internal contrastive learning (ICL) [72], NTL [56], and BERT-AD [17].
We adopt the implementations from PyOD [26] or their official repositories.

Implementation Details. To form meta-training sets, we bin the data by their timestamps (year for AnoShift and month for Malware), so each bin corresponds to one training distribution $P_{j}$. The training tasks are mixed with normality ratio $\pi = 0.8$. To create more training tasks, we augment the data using attribute permutations, resulting in additional training distributions. These attribute permutations increase the variability of the training tasks and encourage the model to learn permutation-invariant features. At test time, the attributes are not permuted. Details are in Supp. F.

Results. In Tab. 3, we report the results on AnoShift, split into AVG (data from 2011 to 2015) and FAR (data from 2014 and 2015). The two splits show how performance degrades from the average case (AVG) to the case where strong distribution shifts happen after a long time interval (FAR). The results on Malware with varying anomaly ratios are in Tab. 12 and Supp. I.6. We report the average AUC with standard deviation over five independent test runs. The results on AnoShift and Malware show that ACR outperforms all baselines in all distribution-shifted settings. Remarkably, ACR is the only method that clearly outperforms random guessing on the shifted datasets (the FAR split of AnoShift and the test split of Malware). All baselines perform worse than random on the shifted test sets, even though they achieve strong results when there are no distribution shifts (see results in Alvarez et al. [2], Dragoi et al. [17], Han et al. [26]). This worse-than-random phenomenon is also verified in the benchmark paper AnoShift [17]. The reason is that in cyber-security applications (e.g., AnoShift and Malware), the attacks evolve adversarially: the anomalies (cyber attacks) are intentionally updated to be as similar as possible to the normal data in order to spoof the firewalls.
That's why static AD methods like KNN flip their predictions during test time and achieve worse-than-random performance. In terms of robustness, although ACR-DSVDD's performance degrades a little (within $3\%$ ) when the anomaly ratio increases, ACR-NTL is fairly robust to high anomaly ratios. The degradation is attributed to the fact that the majority of normal samples get blurred as the anomaly ratio increases, leading to noisy batch statistics. + +# 4.3 Ablation Studies + +We perform several ablation studies in Supp. I.1, including 1) demonstrating the benefit of the Meta Outlier Exposure loss, 2) studying the effect of batch normalization, and 3) analyzing the effects of the batch size and the number of meta-training classes. To show that Meta Outlier Exposure is a favorable option, we compare it against the one-class classification loss and a fine-tuned version of ResNet152 on domain-specific training data. Tab. 4 shows that our approach outperforms the two alternatives on two image datasets. To analyze the effect of batch normalization, we adjust batch normalization usage during training and testing as listed in Tab. 5. More details and the studies on the batch size, the number of meta-training classes, other normalization techniques (LayerNorm, InstanceNorm, and GroupNorm), effects of batch norm position, and robustness of the mixing hyperparameter $\pi$ can be found in Supp. I.1. + +# 5 Conclusion + +We studied the problem of adapting a learned AD method to a new data distribution, where the concept of "normality" changed. Our method is a zero-shot approach and requires no training or fine-tuning on a new data set. We developed a new meta-training approach, where we trained an off-the-shelf deep AD method on a (meta-) set of interrelated datasets, adopting batch normalization in every layer, and used samples from the meta set as either normal samples or anomalies, depending on the context.
We showed that the approach robustly generalized to new, unseen anomalies. + +Our experiments on image and tabular data demonstrated superior zero-shot adaptation performance when no foundation model was available. We stress that this is an important result since many, if not most, AD applications in the real world rely on specialized datasets: medical images, data from industrial assembly lines, malware data, network intrusion data, etc. Existing foundation models often do not capture these data, as we showed. Ultimately, our analysis shows that relatively small modifications to model training (meta-learning, batch normalization, and providing artificial anomalies from the meta-set) will enable the deployment of existing models in zero-shot AD tasks. + +Limitations & Societal Impacts Our method depends on the three assumptions listed in Sec. 2. If those assumptions are broken, zero-shot adaptation cannot be assured. + +Anomaly detectors are trained to detect atypical/under-represented data in a data set. Therefore, deploying an anomaly detector, e.g., in video surveillance, may ultimately discriminate against under-represented groups. Anomaly detection methods should therefore be critically reviewed when deployed on human data. + +# Acknowledgements + +SM acknowledges support by the National Science Foundation (NSF) under an NSF CAREER Award, award numbers 2003237 and 2007719, by the Department of Energy under grant DE-SC0022331, by the IARPA WRIVA program, and by gifts from Qualcomm and Disney. Part of this work was conducted within the DFG research unit FOR 5359 on Deep Learning on Sparse Chemical Process Data. PS was supported by the US National Science Foundation under awards 1900644 and 1927245 and by the National Institutes of Health under awards R01-AG065330-02S1 and R01-LM013344. SM and PS were supported by the Hasso Plattner Institute (HPI) Research Center in Machine Learning
MK acknowledges support by the Carl-Zeiss Foundation, the DFG awards KL 2698/2-1, KL 2698/5-1, KL 2698/6-1, and KL 2698/7-1, and the BMBF awards 03IB0770E and 01IS21010C. We thank Eliot Wong-Toi for helpful feedback on the manuscript. + +The Bosch Group is carbon neutral. Administration, manufacturing, and research activities no longer leave a carbon footprint. This also includes the GPU clusters on which the experiments have been performed. + +# References + +[1] Charu C Aggarwal. An introduction to outlier analysis. In Outlier analysis, pages 1-34. Springer, 2017. +[2] Maxime Alvarez, Jean-Charles Verdier, D'Jeff K Nkashama, Marc Frappier, Pierre-Martin Tardif, and Froduald Kabanza. A revealing large-scale evaluation of unsupervised anomaly detection algorithms. arXiv preprint arXiv:2204.09825, 2022. +[3] Jonathan Baxter. A model of inductive bias learning. Journal of artificial intelligence research, 12:149-198, 2000. +[4] Liron Bergman and Yedid Hoshen. Classification-based anomaly detection for general data. In International Conference on Learning Representations, 2020. +[5] Paul Bergmann, Michael Fauser, David Sattlegger, and Carsten Steger. MVTec AD: a comprehensive real-world dataset for unsupervised anomaly detection. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 9592-9600, 2019. +[6] Markus M Breunig, Hans-Peter Kriegel, Raymond T Ng, and Jörg Sander. LOF: identifying density-based local outliers. In Proceedings of the 2000 ACM SIGMOD international conference on Management of data, pages 93-104, 2000. +[7] Hila Chefer, Shir Gur, and Lior Wolf. Generic attention-model explainability for interpreting bi-modal and encoder-decoder transformers. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 397-406, 2021. +[8] Bingqing Chen, Luca Bondi, and Samarjit Das. Learning to adapt to domain shifts with few-shot samples in anomalous sound detection.
In 2022 26th International Conference on Pattern Recognition (ICPR), pages 133-139. IEEE, 2022. +[9] Xiaoran Chen and Ender Konukoglu. Unsupervised detection of lesions in brain MRI using constrained adversarial auto-encoders. In MIDL Conference book. MIDL, 2018. +[10] Sungha Choi, Seunghan Yang, Seokeon Choi, and Sungrack Yun. Improving test-time adaptation via shift-agnostic weight regularization and nearest source prototypes. In Computer Vision-ECCV 2022: 17th European Conference, Tel Aviv, Israel, October 23–27, 2022, Proceedings, Part XXXIII, pages 440–458. Springer, 2022. +[11] Gregory Cohen, Saeed Afshar, Jonathan Tapson, and André van Schaik. EMNIST: extending MNIST to handwritten letters. In 2017 international joint conference on neural networks (IJCNN), pages 2921-2926. IEEE, 2017. +[12] Niv Cohen and Yedid Hoshen. Sub-image anomaly detection with deep pyramid correspondences. arXiv preprint arXiv:2005.02357, 2020. +[13] Lucas Deecke, Robert Vandermeulen, Lukas Ruff, Stephan Mandt, and Marius Kloft. Image anomaly detection with generative adversarial networks. In Joint European conference on machine learning and knowledge discovery in databases, pages 3-17. Springer, 2018. +[14] Lucas Deecke, Lukas Ruff, Robert A Vandermeulen, and Hakan Bilen. Transfer-based semantic anomaly detection. In International Conference on Machine Learning, pages 2546-2558. PMLR, 2021. + +[15] Thomas Defard, Aleksandr Setkov, Angelique Loesch, and Romaric Audigier. PaDiM: a patch distribution modeling framework for anomaly detection and localization. In Pattern Recognition. ICPR International Workshops and Challenges: Virtual Event, January 10–15, 2021, Proceedings, Part IV, pages 475–489. Springer, 2021. +[16] Kaize Ding, Qinghai Zhou, Hanghang Tong, and Huan Liu. Few-shot network anomaly detection via cross-network meta-learning. Proceedings of the Web Conference 2021, 2021. +[17] Marius Dragoi, Elena Burceanu, Emanuela Haller, Andrei Manolache, and Florin Brad.
Anoshift: A distribution shift benchmark for unsupervised anomaly detection. In Thirty-sixth Conference on Neural Information Processing Systems Datasets and Benchmarks Track, 2022. +[18] Sepideh Esmaeilpour, Bing Liu, Eric Robertson, and Lei Shu. Zero-shot out-of-distribution detection based on the pretrained model clip. In Proceedings of the AAAI conference on artificial intelligence, 2022. +[19] Alireza Fallah, Aryan Mokhtari, and Asuman Ozdaglar. Generalization of model-agnostic meta-learning algorithms: Recurring and unseen tasks. Advances in Neural Information Processing Systems, 34:5469–5480, 2021. +[20] Tongtong Feng, Q. Qi, Jingyu Wang, and Jianxin Liao. Few-shot class-adaptive anomaly detection with model-agnostic meta-learning. 2021 IFIP Networking Conference (IFIP Networking), pages 1–9, 2021. +[21] Chelsea Finn, Pieter Abbeel, and Sergey Levine. Model-agnostic meta-learning for fast adaptation of deep networks. In International conference on machine learning, pages 1126–1135. PMLR, 2017. +[22] Ahmed Frikha, Denis Krompaß, Hans-Georg Köpken, and Volker Tresp. Few-shot one-class classification via meta-learning. Proceedings of the AAAI Conference on Artificial Intelligence, 35(8):7448-7456, May 2021. +[23] Izhak Golan and Ran El-Yaniv. Deep anomaly detection using geometric transformations. In Advances in Neural Information Processing Systems, pages 9758-9769, 2018. +[24] Adam Goodge, Bryan Hooi, See-Kiong Ng, and Wee Siong Ng. Lunar: Unifying local outlier detection methods via graph neural networks. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 36, pages 6737-6745, 2022. +[25] Manish Gupta, Jing Gao, Charu C Aggarwal, and Jiawei Han. Outlier detection for temporal data: A survey. IEEE Transactions on Knowledge and data Engineering, 26(9):2250-2267, 2013. +[26] Songqiao Han, Xiyang Hu, Hailiang Huang, Minqi Jiang, and Yue Zhao. Adbench: Anomaly detection benchmark. 
In Thirty-sixth Conference on Neural Information Processing Systems Datasets and Benchmarks Track, 2022. +[27] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770-778, 2016. +[28] Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr Dollár, and Ross Girshick. Masked autoencoders are scalable vision learners. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 16000-16009, 2022. +[29] Dan Hendrycks and Thomas Dietterich. Benchmarking neural network robustness to common corruptions and perturbations. In International Conference on Learning Representations, 2019. +[30] Dan Hendrycks, Mantas Mazeika, and Thomas Dietterich. Deep anomaly detection with outlier exposure. In International Conference on Learning Representations, 2018. +[31] Dan Hendrycks, Mantas Mazeika, Saurav Kadavath, and Dawn Song. Using self-supervised learning can improve model robustness and uncertainty. Advances in Neural Information Processing Systems, 32:15663-15674, 2019. + +[32] Chaoqin Huang, Haoyan Guan, Aofan Jiang, Ya Zhang, Michael Spratling, and Yan-Feng Wang. Registration based few-shot anomaly detection. In Computer Vision-ECCV 2022: 17th European Conference, Tel Aviv, Israel, October 23-27, 2022, Proceedings, Part XXIV, pages 303-319. Springer, 2022. +[33] Gao Huang, Zhuang Liu, Laurens Van Der Maaten, and Kilian Q Weinberger. Densely connected convolutional networks. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 4700-4708, 2017. +[34] Ngoc Anh Huynh, Wee Keong Ng, and Kanishka Ariyapala. A new adaptive learning algorithm and its application to online malware detection. In International Conference on Discovery Science, pages 18-32. Springer, 2017. +[35] Sergey Ioffe and Christian Szegedy.
Batch normalization: Accelerating deep network training by reducing internal covariate shift. In International conference on machine learning, pages 448-456. PMLR, 2015. +[36] Jongheon Jeong, Yang Zou, Taewan Kim, Dongqing Zhang, Avinash Ravichandran, and Onkar Dabeer. WinCLIP: Zero-/few-shot anomaly classification and segmentation. 2023. +[37] Chao Jia, Yinfei Yang, Ye Xia, Yi-Ting Chen, Zarana Parekh, Hieu Pham, Quoc Le, Yun-Hsuan Sung, Zhen Li, and Tom Duerig. Scaling up visual and vision-language representation learning with noisy text supervision. In International Conference on Machine Learning, pages 4904-4916. PMLR, 2021. +[38] Wouter M Kouw and Marco Loog. A review of domain adaptation without target labels. IEEE transactions on pattern analysis and machine intelligence, 43(3):766-785, 2019. +[39] Jedrzej Kozerawski and Matthew A. Turk. Clear: Cumulative learning for one-shot one-class image recognition. 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 3446-3455, 2018. +[40] Anna Kruspe. One-way prototypical networks. ArXiv, abs/1906.00820, 2019. +[41] Brenden M Lake, Ruslan Salakhutdinov, and Joshua B Tenenbaum. Human-level concept learning through probabilistic program induction. Science, 350(6266):1332-1338, 2015. +[42] Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278-2324, 1998. +[43] Aodong Li, Alex Boyd, Padhraic Smyth, and Stephan Mandt. Detecting and adapting to irregular distribution shifts in bayesian online learning. Advances in neural information processing systems, 34:6816-6828, 2021. +[44] Aodong Li, Chen Qiu, Padhraic Smyth, Marius Kloft, Stephan Mandt, and Maja Rudolph. Deep anomaly detection under labeling budget constraints. arXiv preprint arXiv:2302.07832, 2023. +[45] Chun-Liang Li, Kihyuk Sohn, Jinsung Yoon, and Tomas Pfister. CutPaste: Self-supervised learning for anomaly detection and localization.
In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 9664-9674, 2021. +[46] Da Li, Yongxin Yang, Yi-Zhe Song, and Timothy Hospedales. Learning to generalize: Meta-learning for domain generalization. In Proceedings of the AAAI conference on artificial intelligence, volume 32, 2018. +[47] Zheng Li, Yue Zhao, Nicola Botta, Cezar Ionescu, and Xiyang Hu. COPOD: copula-based outlier detection. In 2020 IEEE international conference on data mining (ICDM), pages 1118-1123. IEEE, 2020. +[48] Zheng Li, Yue Zhao, Xiyang Hu, Nicola Botta, Cezar Ionescu, and George Chen. ECOD: Unsupervised outlier detection using empirical cumulative distribution functions. IEEE Transactions on Knowledge and Data Engineering, 2022. + +[49] Hyesu Lim, Byeonggeun Kim, Jaegul Choo, and Sungha Choi. TTN: A domain-shift aware batch normalization in test-time adaptation. In The Eleventh International Conference on Learning Representations, 2023. +[50] Fei Tony Liu, Kai Ming Ting, and Zhi-Hua Zhou. Isolation-based anomaly detection. ACM Transactions on Knowledge Discovery from Data (TKDD), 6(1):1-39, 2012. +[51] Philipp Liznerski, Lukas Ruff, Robert A Vandermeulen, Billy Joe Franks, Klaus-Robert Müller, and Marius Kloft. Exposing outlier exposure: What can be learned from few, one, and zero outlier images. Transactions on Machine Learning Research, 2022. +[52] Yiwei Lu, Frank Yu, Mahesh Kumar Krishna Reddy, and Yang Wang. Few-shot scene-adaptive anomaly detection. In ECCV, 2020. +[53] Zachary Nado, Shreyas Padhy, D Sculley, Alexander D'Amour, Balaji Lakshminarayanan, and Jasper Snoek. Evaluating prediction-time batch normalization for robustness under covariate shift. arXiv preprint arXiv:2006.10963, 2020. +[54] Alex Nichol, Joshua Achiam, and John Schulman. On first-order meta-learning algorithms. arXiv preprint arXiv:1803.02999, 2018. +[55] Emanuele Principi, Fabio Vesperini, Stefano Squartini, and Francesco Piazza.
Acoustic novelty detection with adversarial autoencoders. In 2017 International Joint Conference on Neural Networks (IJCNN), pages 3324-3330. IEEE, 2017. +[56] Chen Qiu, Timo Pfrommer, Marius Kloft, Stephan Mandt, and Maja Rudolph. Neural transformation learning for deep anomaly detection beyond images. In International Conference on Machine Learning, pages 8703-8714. PMLR, 2021. +[57] Chen Qiu, Marius Kloft, Stephan Mandt, and Maja Rudolph. Raising the bar in graph-level anomaly detection. In Proceedings of the Thirty-First International Joint Conference on Artificial Intelligence, IJCAI-22, pages 2196–2203, 2022. +[58] Chen Qiu, Aodong Li, Marius Kloft, Maja Rudolph, and Stephan Mandt. Latent outlier exposure for anomaly detection with contaminated data. In Kamalika Chaudhuri, Stefanie Jegelka, Le Song, Csaba Szepesvari, Gang Niu, and Sivan Sabato, editors, Proceedings of the 39th International Conference on Machine Learning, volume 162 of Proceedings of Machine Learning Research, pages 18153-18167. PMLR, 17-23 Jul 2022. URL https://proceedings.mlr.press/v162/qiu22b.html. +[59] Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. In International Conference on Machine Learning, pages 8748-8763. PMLR, 2021. +[60] Ilija Radosavovic, Raj Prateek Kosaraju, Ross Girshick, Kaiming He, and Piotr Dollár. Designing network design spaces. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 10428-10436, 2020. +[61] Sridhar Ramaswamy, Rajeev Rastogi, and Kyuseok Shim. Efficient algorithms for mining outliers from large data sets. In Proceedings of the 2000 ACM SIGMOD international conference on Management of data, pages 427-438, 2000. +[62] Oliver Rippel, Patrick Mertens, and Dorit Merhof.
Modeling the distribution of normal data in pre-trained deep features for anomaly detection. In 2020 25th International Conference on Pattern Recognition (ICPR), pages 6726-6733. IEEE, 2021. +[63] Lukas Ruff, Robert Vandermeulen, Nico Goernitz, Lucas Deecke, Shoaib Ahmed Siddiqui, Alexander Binder, Emmanuel Müller, and Marius Kloft. Deep one-class classification. In International conference on machine learning, pages 4393-4402. PMLR, 2018. +[64] Lukas Ruff, Robert A Vandermeulen, Nico Gornitz, Alexander Binder, Emmanuel Müller, Klaus-Robert Müller, and Marius Kloft. Deep semi-supervised anomaly detection. In International Conference on Learning Representations, 2019. + +[65] Lukas Ruff, Jacob R Kauffmann, Robert A Vandermeulen, Grégoire Montavon, Wojciech Samek, Marius Kloft, Thomas G Dietterich, and Klaus-Robert Müller. A unifying review of deep and shallow anomaly detection. Proceedings of the IEEE, 2021. +[66] Thomas Schlegl, Philipp Seebock, Sebastian M Waldstein, Ursula Schmidt-Erfurth, and Georg Langs. Unsupervised anomaly detection with generative adversarial networks to guide marker discovery. In International conference on information processing in medical imaging, pages 146-157. Springer, 2017. +[67] Steffen Schneider, Evgenia Rusak, Luisa Eck, Oliver Bringmann, Wieland Brendel, and Matthias Bethge. Improving robustness against common corruptions by covariate shift adaptation. Advances in Neural Information Processing Systems, 33:11539-11551, 2020. +[68] Tim Schneider, Chen Qiu, Marius Kloft, Decky Aspandi Latif, Steffen Staab, Stephan Mandt, and Maja Rudolph. Detecting anomalies within time series using local neural transformations. arXiv preprint arXiv:2202.03944, 2022. +[69] Bernhard Scholkopf, Robert C Williamson, Alex Smola, John Shawe-Taylor, and John Platt. Support vector method for novelty detection. Advances in neural information processing systems, 12, 1999. 
+[70] Bernhard Schölkopf, John C Platt, John Shawe-Taylor, Alex J Smola, and Robert C Williamson. Estimating the support of a high-dimensional distribution. Neural computation, 13(7):1443-1471, 2001. +[71] Eli Schwartz, Assaf Arbelle, Leonid Karlinsky, Sivan Harary, Florian Scheidegger, Sivan Doveh, and Raja Giryes. Maeday: Mae for few and zero shot anomaly-detection. arXiv preprint arXiv:2211.14307, 2022. +[72] Tom Shenkar and Lior Wolf. Anomaly detection for tabular data with internal contrastive learning. In International Conference on Learning Representations, 2021. +[73] Shelly Sheynin, Sagie Benaim, and Lior Wolf. A hierarchical transformation-discriminating generative model for few shot anomaly detection. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), pages 8495-8504, October 2021. +[74] Yaniv Shulman. Unsupervised contextual anomaly detection using joint deep variational generative models. arXiv preprint arXiv:1904.00548, 2019. +[75] Kihyuk Sohn, Chun-Liang Li, Jinsung Yoon, Minho Jin, and Tomas Pfister. Learning and evaluating representations for deep one-class classification. In International Conference on Learning Representations, 2020. +[76] Dequan Wang, Evan Shelhamer, Shaoteng Liu, Bruno Olshausen, and Trevor Darrell. Tent: Fully test-time adaptation by entropy minimization. In International Conference on Learning Representations, 2021. +[77] Ze Wang, Yipin Zhou, Rui Wang, Tsung-Yu Lin, Ashish Shah, and Ser-Nam Lim. Few-shot fast-adaptive anomaly detection. In Advances in Neural Information Processing Systems, 2022. +[78] Jhih-Ciang Wu, Ding-Jie Chen, Chiou-Shann Fuh, and Tyng-Luh Liu. Learning unsupervised metaformer for anomaly detection. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), pages 4369-4378, October 2021. +[79] Yongqin Xian, Christoph H Lampert, Bernt Schiele, and Zeynep Akata. Zero-shot learning—a comprehensive evaluation of the good, the bad and the ugly. 
IEEE transactions on pattern analysis and machine intelligence, 41(9):2251-2265, 2018. +[80] Zehao Xiao, Xiantong Zhen, Shengcai Liao, and Cees GM Snoek. Energy-based test sample adaptation for domain generalization. In The Eleventh International Conference on Learning Representations, 2023. +[81] Jiancheng Yang, Rui Shi, and Bingbing Ni. Medmnist classification decathlon: A lightweight automl benchmark for medical image analysis. In 2021 IEEE 18th International Symposium on Biomedical Imaging (ISBI), pages 191-195. IEEE, 2021. + +[82] Jiahui Yu, Zirui Wang, Vijay Vasudevan, Legg Yeung, Mojtaba Seyedhosseini, and Yonghui Wu. Coca: Contrastive captioners are image-text foundation models. Transactions on Machine Learning Research, 2022. +[83] Lu Yuan, Dongdong Chen, Yi-Ling Chen, Noel Codella, Xiyang Dai, Jianfeng Gao, Houdong Hu, Xuedong Huang, Boxin Li, Chunyuan Li, et al. Florence: A new foundation model for computer vision. arXiv preprint arXiv:2111.11432, 2021. +[84] Shen Zhang, Fei Ye, Bingnan Wang, and Thomas G. Habetler. Few-shot bearing anomaly detection via model-agnostic meta-learning. 2020 23rd International Conference on Electrical Machines and Systems (ICEMS), pages 1341–1346, 2020. +[85] Chong Zhou and Randy C Paffenroth. Anomaly detection with robust deep autoencoders. In Proceedings of the 23rd ACM SIGKDD international conference on knowledge discovery and data mining, pages 665-674, 2017. +[86] Chong Zhou, Chen Change Loy, and Bo Dai. Denseclip: Extract free dense labels from clip. arXiv preprint arXiv:2112.01071, 2021. + +# A Justifications of Assumptions A1-A3 + +As follows, we provide justifications for assumptions A1-A3. Following the justification, we also discuss possibilities to remove or mitigate the assumptions. + +A1 Assuming an available meta-training set is widely adopted in few-shot learning or meta-learning [21, 22, 32, 54] and domain generalization [46, 80]. 
In practice, the meta-training set can be generated using available covariates. For example, for our tabular data experiment, we used the timestamps; in medical data, one could use data collected from different hospitals or different patients to obtain separate sets for meta-training; and in MVTec-AD, we used the other training classes except for the target class to form the training set. We also provided an ablation study on the number of classes in the meta-training set (Tab. 7). We found that even in the extreme case where we only have one data class in the training set, the trained model still provides meaningful results. + +There are multiple ways to mitigate this assumption. If one does not have a meta-training set at hand, one can train their model on a different but related dataset, e.g., train on Omniglot but test on MNIST (see results below for this setting; we still get decent AUC results on MNIST). + +
| Anomaly ratio | 1% | 5% | 10% | 20% |
| --- | --- | --- | --- | --- |
| AUROC | 84.4±2.4 | 85.2±2.5 | 84.3±2.5 | 82.2±2.4 |
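The covariate-based construction described above (binning data by a covariate such as the timestamp, then mixing in samples from the other bins as anomalies at rate $1 - \pi$, cf. Sec. 4.2) can be sketched in a few lines. This is a minimal illustration of our reading of that construction; all function and variable names are ours, not the paper's code:

```python
import random
from collections import defaultdict

def build_meta_tasks(records, pi=0.8, seed=0):
    """Sketch: form one meta-training task per covariate bin.
    `records` is a list of (datum, year) pairs, as in the Anoshift setup.
    Each bin supplies the "normal" data of its task; samples from the
    other bins play the role of anomalies, mixed in so that the normality
    ratio of the task is approximately `pi`."""
    rng = random.Random(seed)
    bins = defaultdict(list)
    for x, year in records:
        bins[year].append(x)
    tasks = {}
    for year, normal in bins.items():
        others = [x for y, xs in bins.items() if y != year for x in xs]
        n_anom = max(1, int(len(normal) * (1 - pi) / pi))
        anomalies = rng.sample(others, min(n_anom, len(others)))
        # label 0 = normal (majority), label 1 = anomaly (minority)
        tasks[year] = [(x, 0) for x in normal] + [(x, 1) for x in anomalies]
    return tasks

# toy usage: three yearly bins of scalar "records"
data = [(float(i), 2006 + i % 3) for i in range(30)]
tasks = build_meta_tasks(data, pi=0.8)
```

With $\pi = 0.8$, each task here ends up with ten normal samples and two borrowed anomalies, i.e., roughly the 80/20 mixture used in our experiments.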
+ +A2 Batch-level prediction is a common assumption in the robustness literature [10, 49, 53, 67, 76]. In addition, batch-level predictions are widely used in real life. For example, people examine COVID-19 test samples at a batch level out of economic and time-efficiency considerations. To relax this assumption, our method can easily be extended to score individual data by presetting the sample mean and variance in BatchNorm layers with a collection of data. These moments are then fixed when predicting new individual data. Empirically, to understand the impact of the batch size on the prediction performance, we conducted an ablation study with a batch size as small as three in the experiments. + +A3 Besides being supported by the intuition that anomalies are rare, this is consistent with most of the data used in the literature. ADBench [26] has 57 anomaly detection datasets (with an average anomaly ratio of $5\%$ ), all matching our assumption that normal data make up the majority in each dataset. + +We provide a simple mathematical argument for the validity of A3, showing that a mini-batch with a majority of anomalies is very unlikely to be drawn for a sufficiently large mini-batch size $B$ . Let $p < 1/2$ denote the fraction of anomalies among the data and define $\Delta = 1/2 - p > 0$ . For every data point $\mathbf{x}_i$ in the batch, let $y_i \sim \text{Bernoulli}(p)$ encode whether $\mathbf{x}_i$ is normal ( $y_i = 0$ ) or abnormal ( $y_i = 1$ ). The variable $S_B := y_1 + \dots + y_B$ thus counts the number of anomalies in the batch, so that $S_B < B/2$ means that the majority of the data is normal. We want to show that the violation of A3 is unlikely, that is, $P(S_B \geq B/2)$ is small.
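Before bounding this tail probability analytically, it can be estimated numerically. A minimal Monte Carlo sketch (the values of `p`, `B`, and the trial count are illustrative choices, not values from the paper):

```python
import math
import random

p, B, trials = 0.1, 30, 20_000   # anomaly fraction, batch size, MC trials
delta = 0.5 - p
rng = random.Random(0)

# Empirical estimate of P(S_B >= B/2): draw batches of B Bernoulli(p)
# labels and count how often anomalies form a majority.
hits = sum(
    sum(rng.random() < p for _ in range(B)) >= B / 2
    for _ in range(trials)
)
empirical = hits / trials

# Hoeffding-style exponential tail bound exp(-2 B delta^2)
hoeffding = math.exp(-2 * B * delta ** 2)
```

Even with a fairly high anomaly fraction of 10%, the empirical tail probability stays (far) below the exponential bound, matching the concentration argument that follows.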
By Hoeffding's inequality, since $0 \leq y_i \leq 1$ for all $i$ , it follows that $P(S_B \geq B/2) = P(S_B - \mathbb{E}[S_B] \geq B/2 - Bp) \leq \exp\left(-2B(1/2 - p)^2\right) = \exp\left(-2B\Delta^2\right)$ , which converges to zero exponentially fast as $B \to \infty$ . + +# B Generalization to an Unseen Distribution $P_{*}$ + +This section aims to provide a proof for Thm. 2.1. Inspired by Fallah et al. [19], we derive an upper bound on the generalization error of our meta-training approach on unseen distributions. The error is described in terms of the data distributions transformed by batch-norm-involved feature extractors. + +Definition B.1. Given a sample space $\Omega$ and its $\sigma$ -field $\mathcal{F}$ , the total variation distance between two probability measures $P_{i}$ and $P_{j}$ defined on $\mathcal{F}$ is + +$$
\left\| P_i - P_j \right\|_{TV} = \sup_{A \in \mathcal{F}} \left| P_i(A) - P_j(A) \right| = \sup_{f: 0 \leq f \leq 1} \left| \mathbb{E}_{x \sim P_i}[f(x)] - \mathbb{E}_{x \sim P_j}[f(x)] \right| \tag{8}
$$ + +Now we split the neural network into two parts: the first part is the layers up to (and including) the last batch normalization layer, referred to as the feature extractor $\mathbf{z} = f_{\theta}(\mathbf{x})$ , and the second part is the layers after the last batch normalization layer, namely the anomaly score map $S(\mathbf{z}) = S(f_{\theta}(\mathbf{x})) = S_{\theta}(\mathbf{x})$ . $S(\mathbf{z})$ may involve learnable parameters, but we omit the notation for conciseness. The split allows us to separate the effects of batch normalization layers on the generalization error of unseen distributions.
+ +Under the transformation $f_{\theta}$ consisting of batch normalization layers, the data distribution is transformed into the distribution of adaptively centered representations + +$$
P_j(\mathbf{x}) \stackrel{\mathbf{z} = f_\theta(\mathbf{x})}{\Longrightarrow} P_j^z(\mathbf{z}), \quad j = 1, \dots, K, * \tag{9}
$$ + +resulting in $P_{j}^{z}$ with $\mathbb{E}_{P_j^z}[\mathbf{z}] = 0$ and $\mathrm{Var}_{P_j^z}[\mathbf{z}] = 1$ . + +Assume the mini-batch is large enough so that the mini-batch means and variances are approximately constant across batches, i.e., the batch statistics in batch normalization layers are equal to the population-truth values. Consequently, when $\mathbf{x}_1,\ldots ,\mathbf{x}_B\stackrel {\mathrm{i.i.d.}}{\sim}P_j$ , which together constitute $\mathbf{x}_{\mathcal{B}}$ , their latent representations satisfy $\mathbf{z}_i\coloneqq f_\theta^i (\mathbf{x}_{\mathcal{B}})\stackrel {\mathrm{i.i.d.}}{\sim}P_j^z$ for $i = 1,\dots ,B$ . Then the expectation of the batch-level losses is + +$$
\begin{array}{l}
\mathbb{E}_{\mathbf{x}_{\mathcal{B}} \sim P_j} \left[ \frac{1}{B} \sum_{i=1}^{B} L[S_\theta^i(\mathbf{x}_{\mathcal{B}})] \right] = \mathbb{E}_{\{\mathbf{z}_i \sim P_j^z\}_{i=1}^{B}} \left[ \frac{1}{B} \sum_{i=1}^{B} L[S(\mathbf{z}_i)] \right] \\
= \frac{1}{B} \sum_{i=1}^{B} \mathbb{E}_{\{\mathbf{z}_i \sim P_j^z\}_{i=1}^{B}} \left[ L[S(\mathbf{z}_i)] \right] \\
= \mathbb{E}_{\mathbf{z} \sim P_j^z} [L[S(\mathbf{z})]] \tag{10}
\end{array}
$$ + +Assumption B.2. For any parameters (if any), the loss function $L[S(\cdot)]$ is bounded by $C$ . + +We now quantify the generalization error to an unseen distribution $P_{*}$ by the difference between the expected loss on data batches from $P_{*}$ and that on the meta-training distributions.
+ +$$
+\begin{array}{l}
+\left| \mathbb{E}_{\mathbf{x}_{\mathcal{B}} \sim P_*} \left[ \frac{1}{B} \sum_{i=1}^{B} L\left[ S_\theta^i(\mathbf{x}_{\mathcal{B}}) \right] \right] - \frac{1}{K} \sum_{j=1}^{K} \mathbb{E}_{\mathbf{x}_{\mathcal{B}} \sim P_j} \left[ \frac{1}{B} \sum_{i=1}^{B} L\left[ S_\theta^i(\mathbf{x}_{\mathcal{B}}) \right] \right] \right| \tag{11} \\
+= \left| \mathbb{E}_{\mathbf{z} \sim P_*^z} [L[S(\mathbf{z})]] - \frac{1}{K} \sum_{j=1}^{K} \mathbb{E}_{\mathbf{z} \sim P_j^z} [L[S(\mathbf{z})]] \right| \quad (\text{by Eq. (10)}) \tag{12} \\
+= C \left| \mathbb{E}_{\mathbf{z} \sim P_*^z} \left[ \frac{L[S(\mathbf{z})]}{C} \right] - \frac{1}{K} \sum_{j=1}^{K} \mathbb{E}_{\mathbf{z} \sim P_j^z} \left[ \frac{L[S(\mathbf{z})]}{C} \right] \right| \quad (\text{by Assumption B.2}) \tag{13} \\
+\leq C \sup_{0 \leq L/C \leq 1} \left| \mathbb{E}_{\mathbf{z} \sim P_*^z} \left[ \frac{L[S(\mathbf{z})]}{C} \right] - \frac{1}{K} \sum_{j=1}^{K} \mathbb{E}_{\mathbf{z} \sim P_j^z} \left[ \frac{L[S(\mathbf{z})]}{C} \right] \right| \tag{14} \\
+= C \left\| P_*^z - \frac{1}{K} \sum_{j=1}^{K} P_j^z \right\|_{TV} \quad (\text{by Eq. (8)}) \tag{15}
+\end{array}
+$$ + +This result suggests that the generalization error of the loss function is bounded by the total variation distance between $P_{*}^{z}$ and $\frac{1}{K}\sum_{j = 1}^{K}P_{j}^{z}$ . Batch normalization re-calibrates each $P_{j}$ such that $P_{j}^{z}$ is centered at the origin and has unit variance, making the distributions similar. Thus the total variation distance shrinks after batch normalization, lowering the upper bound on the generalization error.
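The shrinking effect can be illustrated with a 1-D toy computation. Assuming idealized batch statistics and two task distributions that differ only in location and scale (the densities and grid below are illustrative, not from the paper), per-distribution standardization maps both onto the same standard normal, collapsing the total variation distance:

```python
import numpy as np

def tv_distance(p, q, dx):
    # Total variation between densities on a shared grid: 0.5 * integral |p - q|
    return 0.5 * float(np.sum(np.abs(p - q)) * dx)

def gaussian_pdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

x = np.linspace(-20.0, 20.0, 200_001)
dx = x[1] - x[0]

# Two "task" distributions before normalization: nearly disjoint supports.
p1 = gaussian_pdf(x, mu=5.0, sigma=2.0)
p2 = gaussian_pdf(x, mu=-3.0, sigma=0.5)
tv_before = tv_distance(p1, p2, dx)   # close to the maximum value 1

# Idealized batch normalization standardizes each task separately,
# so both transformed densities are N(0, 1) and the distance vanishes.
z1 = gaussian_pdf(x, mu=0.0, sigma=1.0)
z2 = gaussian_pdf(x, mu=0.0, sigma=1.0)
tv_after = tv_distance(z1, z2, dx)    # 0
```

In higher dimensions and with finite batches the collapse is only partial, but the direction of the effect is the same: the transformed distributions $P_j^z$ are closer to each other than the raw $P_j$, which is exactly what tightens the bound in Eq. (15).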
+ +The limitation of this analysis is that we assume the batch statistics equal the population-truth moments (mean and variance) in $\mathbf{z}_i\stackrel {\mathrm{i.i.d.}}{\sim}P_j^z$ , so we cannot analyze the effects of the batch size $B$ during training and testing. That said, we provide empirical evaluations with different batch sizes $B$ at test time in Supp. I.1. + +# C Algorithm + +The training procedure of our approach is simple and similar to any stochastic gradient-based optimization. The only modification is to take into account the existence of a meta-training set. See Alg. 1. + +Algorithm 1: Training procedure of ACR
+Input: $K$ interrelated training distributions $P_{1},\dots ,P_{K}\stackrel {\mathrm{i.i.d.}}{\sim}\mathcal{Q}$ ; mixing rate $\pi$ ; deep anomaly detector model parameters $\theta$ ; sub-sample size $M$ ; mini-batch size $|\mathcal{B}|$ ; learning rate $\alpha$ ; number of training iterations $T$
+Output: Optimized anomaly detector with parameters $\theta_T$
+1 Randomly initialize $\theta$
+2 Construct $P_1^\pi ,\dots ,P_K^\pi$ (Eq. (6))
+3 for iteration $t$ in $[1,\dots,T]$ do
+4 Sample $M$ tasks $\{\mathbf{x}_{\mathcal{B}_m}\}_{m = 1}^M$ from all $K$ task distributions $\{P_1^\pi ,\dots ,P_K^\pi \}$
+5 $\theta_t\gets \theta_{t - 1} - \alpha \nabla_\theta \frac{1}{M}\sum_{m = 1}^M L(\mathbf{S}_{\theta_{t - 1}}(\mathbf{x}_{\mathcal{B}_m}))$ (Eq. (7))
+6 end + +![](images/fd4a8d07da4b771439d6ee82206032fa67c38c5c7f7458e5211a5b11e975ea23.jpg) +Figure 2: Illustration of batch normalization for AD with two tasks $P_1^\pi$ and $P_2^\pi$ . The method (batch-)normalizes the data in each $P_j^\pi$ separately. If each $P_j^\pi$ consists mainly of normal samples, most samples will be shifted close to the origin (by subtracting the respective task's mean). As a result, the samples from all tasks concentrate around the origin in a joint feature space (gray area) and thus can be tightly enclosed using, e.g., one-class classification.
Samples from the test task are batch normalized in the same way.
+
+# D Toy Example with Batch Normalization
+
+An important component of our method is batch normalization, which shifts and re-scales any data batch $\mathbf{x}_{\mathcal{B}}$ to have sample mean zero and variance one. Batch normalization also provides a basic parameter-free zero-shot batch-level anomaly detector (Eq. (4)). In Fig. 2, we show a 1D case of detecting anomalies in a mixture distribution. The mixture distribution consists of a normal data distribution (the major component) and an abnormal data distribution (the minor component). Eq. (4) adaptively detects anomalies at the batch level by shifting the normal data distribution toward the origin and pushing anomalies away. Setting a user-specified threshold then yields predictions.
+
+# E Baselines
+
+CLIP-AD [51]. CLIP (Contrastive Language-Image Pre-training [59]) is a pre-trained visual representation learning model that builds on open-source images and their natural language supervision signal. The resulting network projects visual images and language descriptions into the same feature space. The pre-trained model can provide meaningful representations for downstream tasks such as image classification and anomaly detection. When applying CLIP to zero-shot anomaly detection, CLIP prepares a pair of natural language descriptions for normal and abnormal data: $\{l_n = \text{"A photo of \{NORMAL\_CLASS\}"},\ l_a = \text{"A photo of something"}\}$.
The anomaly score of a test image $\pmb{x}$ is based on the relative distances from $\pmb{x}$ to $l_n$ and from $\pmb{x}$ to $l_a$ in the feature space,
+
+$$
+s (\boldsymbol{x}) = \frac{\exp \left(\left\langle f_{x} (\boldsymbol{x}), f_{l} \left(l_{a}\right) \right\rangle\right)}{\sum_{c \in \left\{l_{n}, l_{a} \right\}} \exp \left(\left\langle f_{x} (\boldsymbol{x}), f_{l} (c) \right\rangle\right)},
+$$
+
+where $f_{x}$ and $f_{l}$ are the CLIP image and description feature extractors and $\langle \cdot, \cdot \rangle$ is the inner product. We name this baseline CLIP-AD.
+
+Compared to our proposed method, CLIP-AD requires a meaningful language description for the image. However, this is not feasible for all image datasets; for Omniglot [41], for instance, people can't easily name the written characters. In addition, CLIP-AD doesn't apply to other data types like tabular data or time-series data. Finally, CLIP-AD has limited ability to adapt to a data distribution different from its training one. These limitations are demonstrated in our experiments.
+
+OC-MAML [22]. One-Class Model Agnostic Meta Learning (OC-MAML) is a meta-learning algorithm that tailors MAML [21] to the few-shot anomaly detection setup. OC-MAML learns a global model parameterization $\theta$ that can quickly adapt to unseen tasks with a few new task data points $S$ , called a support set. The new-task adaptation takes the global model parameters to a task-specific parameterization $\phi(\theta, S_t)$ that has a low loss $L(Q_t; \phi(\theta, S_t))$ on the new task $t$ , represented by another dataset $Q_t$ , called a query set. OC-MAML uses a one-class support set to update the model parameters $\theta$ with a few gradient steps to get $\phi$ . To learn an easy-to-adapt global parameterization $\theta$ , OC-MAML directly minimizes the target loss on many training tasks. Suppose there are $T$ tasks for training.
The following loss function is minimized:
+
+$$
+l (\theta) = \frac{1}{T} \sum_{t = 1}^{T} \mathbb{E}_{S_{t} \sim p_{S}^{t}, Q_{t} \sim p_{Q}^{t}} [ L (Q_{t}; \phi (\theta, S_{t})) ], \tag{16}
+$$
+
+where $p_S^t$ is task $t$ 's support set distribution and $p_Q^t$ is the query set distribution. During training, the support set contains $K$ normal data points where $K$ is usually small, termed $K$ -shot OC-MAML. The query set contains an equal number of normal and abnormal data points and provides optimization gradients for $\theta$ . At test time, OC-MAML adapts the global parameters $\theta$ on the unseen task's support set $S^*$ , resulting in task-specific parameters $\phi(\theta, S^*)$ . The newly adapted parameters are then used for downstream tasks.
+
+Unlike our method, OC-MAML is not a zero-shot anomaly detector and requires $K$ support data points to adapt. Our method is simpler in training as it doesn't need to adapt to the support set with additional gradient updates, captured by the function $\phi(\theta, S)$ . OC-MAML also differs in its use of batch normalization. Rather than the original batch normalization, OC-MAML first computes the batch moments using the support set and then normalizes both the support and query set with the same moments. However, the computed moments can be noisy when the support set size is small. In our experiments, we adopt a 1-shot OC-MAML for all image data.
+
+ResNet152 [27]. Because batch normalization is an effective tool for zero-shot anomaly detection (see Fig. 1b), we directly apply batch normalization on extracted features from a pre-trained model. We then compute the anomaly score as the Euclidean distance between a feature vector and the origin in the feature space. Our experiments use a ResNet152 model pre-trained on ImageNet as a feature extractor and extract its 2048-dimensional penultimate-layer output as the final feature vector.
When computing the features of an input batch through the batch normalization layers in ResNet, two variants are available: using the batch statistics from training, or re-computing the statistics on the test input batch itself. We name the former variant ResNet152-I and the latter ResNet152-II. The baseline ResNet152 doesn't optimize the feature extractor jointly with the zero-shot detection property of batch normalization. Hence the extracted pre-trained features are not optimal for zero-shot AD.
+
+ADIB [14]. In addition to zero-shot and few-shot anomaly detectors, we also compare with the state-of-the-art deep anomaly detector ADIB [14], which uses pre-trained image features and additional data for outlier exposure in training. We use a "debiased" subset of TinyImageNet as the outlier exposure data for CIFAR100 as suggested in Hendrycks et al. [30], use EMNIST [11] as the outlier exposure data for MNIST as suggested in Liznerski et al. [51], use the OrganC and OrganS datasets [81] as outlier exposure data for OrganA, and use half of the training data as normal data and half of the training data as auxiliary outliers for Omniglot.
+
+# F Implementation Details
+
+Practical Training and Testing. On visual anomaly classification and tabular AD, we construct training and test distributions using labeled datasets, where all $\mathbf{x}$ from the same class $j$ (e.g., all 0's in MNIST) are considered samples from the same $P_{j}$ . The dataset $\mathcal{Q}$ (e.g., MNIST as a whole) is the meta-set of all these distributions.
+
+For training and testing, we split the meta-dataset into disjoint subsets. In the MNIST example, we define $P_0, \dots, P_4$ as the distributions of images with digits 0 - 4 and use them for training. For testing, we select a single distribution of digits not seen during training (e.g., digit 5) as the "new normal" distribution $P_*$ to which we adapt the model. The remaining digits (6 - 9 in this example) are used as test-time anomalies.
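As a minimal illustration of this split (the helper name and variables are ours, not the authors'), one can build the per-class task distributions directly from a labeled dataset:

```python
# Sketch (ours) of the meta-set construction described above: each class label
# defines one distribution P_j; a subset of classes is used for training, one
# held-out class becomes the test-time "new normal", and the rest are anomalies.
from collections import defaultdict

def build_meta_split(xs, ys, train_classes, normal_test_class):
    by_class = defaultdict(list)
    for x, y in zip(xs, ys):
        by_class[y].append(x)
    train_tasks = {j: by_class[j] for j in train_classes}   # P_1, ..., P_K
    test_normal = by_class[normal_test_class]               # P_*
    test_anomalies = [x for j, xs_j in by_class.items()
                      if j not in train_classes and j != normal_test_class
                      for x in xs_j]
    return train_tasks, test_normal, test_anomalies

# Toy usage: digits 0-4 for training, digit 5 as the new normal, 6-9 as anomalies.
xs = list(range(100))
ys = [i % 10 for i in range(100)]
tasks, normal, anom = build_meta_split(xs, ys, train_classes={0, 1, 2, 3, 4},
                                       normal_test_class=5)
print(len(tasks), len(normal), len(anom))  # -> 5 10 40
```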
To reduce variance, we rotate the roles among digits 5 - 9, using each digit as a test distribution once.
+
+# F.1 Implementation Details on Image Data for Anomaly Detection
+
+Hyperparameter Search. We search the hyperparameters on a validation set split from the training set, after which we integrate the validation set into the training set and train the model on that. Then we test the model on the test set.
+
+On CIFAR100-C, we construct the validation set on the training set of the original CIFAR100. We randomly select 20 classes as the validation set and set the remaining classes to be the training dataset at validation time. We search over the network architecture: the number of layers (3, 4, 5, 6), the number of convolutional kernels (32, 64, 128), and the output dimension (8, 16, 32, 64, 128), while fixing the kernel size at $3 \times 3$ . For the learning rate, we search the values 0.1, 0.01, 0.001, 0.0001, and 0.00001, after which we search finer values in a binary search fashion. We also search the mini-batch size $B$ (30, 60) and the number of sub-sampled tasks $M$ (16, 32, 64) at each iteration. We select the combination that trades off convergence speed and optimization stability. When selecting the anomaly ratio $\pi$ (Eq. (6)), we test 0.99, 0.95, 0.9, 0.8, 0.6 and find the results are quite robust to these values. So we fix $\pi = 0.8$ across the experiments.
+
+On non-natural image datasets (OrganA, Omniglot, and MNIST), we search hyperparameters on the validation set of Omniglot and use the searched hyperparameters on all datasets. Specifically, at validation time, we randomly split Omniglot into 1200 classes for training and 423 classes for validation. After optimizing the hyperparameters, we consistently use the first 1200 classes for training and the remaining 423 classes for testing. The searched hyperparameters are the same as the ones described for CIFAR100-C above.
+
+Training Protocols.
We train the model for 6,000 iterations on CIFAR100 data, 10,000 iterations on Omniglot, and 2,000 iterations on MNIST and OrganA. Each iteration contains 32 training tasks; each task mini-batch has 30 (for datasets other than CIFAR100) or 60 (for CIFAR100) points sampled from $P_{j}^{0.8}$ . All 32 training tasks' gradients are averaged, incurring one gradient update per iteration.
+
+ACR-DSVDD. We use the standard convolutional neural network architecture used in meta-learning. Specifically, the network contains four convolution layers. Each convolution layer is followed by a batch normalization layer and a ReLU activation layer. The final layer is a fully-connected layer followed by a batch normalization layer. The center of DSVDD has the same dimension as the output of the fully-connected layer, which is 32. For CIFAR100/CIFAR100-C, each convolution layer has 128 kernels. For MNIST, Omniglot, and OrganA, each convolution layer has 64 kernels. Each kernel's size is $3 \times 3$ . We use Adam with a learning rate of 0.003 on the CIFAR100 dataset and 1e-4 on all the other datasets.
+
+ACR-BCE. We use the same network structure as ACR-DSVDD without the final batch normalization layer and the center. The final fully-connected layer has an output dimension of 1. We train the model with a binary cross-entropy loss. We use Adam with a learning rate of 0.003 on the CIFAR100 dataset and 1e-4 on all the other datasets.
+
+# F.2 Implementation Details on MVtec AD for Anomaly Segmentation
+
+Training Protocols. Since the images are roughly aligned, we can use a sliding window over the image to detect local defects in each window. However, this requires pixel-level alignment and is unrealistic for the MVtec AD dataset. We instead first extract informative texture features using a sliding window, which corresponds to 2D convolutions. The convolution kernels are instantiated with those of a pre-trained ResNet.
First, we follow the same data pre-processing steps as Cohen and Hoshen [12], Defard et al. [15], Rippel et al. [62] to extract the texture representations (the third layer's output in our case) of a WideResNet-50-2 pre-trained on ImageNet. Second, we detect anomalies in the extracted features at each sliding window position with our ACR method. Specifically, each window position corresponds to one image patch. We stack into a batch the patches taken from a set of images that all share the same spatial position in the image. For example, we may stack the top-left patch of all testing wood images into a batch and use ACR to detect anomalies in that batch. Finally, the window-wise anomaly scores are bilinearly interpolated to the size of the original image, i.e., the pixel-level anomaly scores.
+
+After feature extraction, a batch of $B$ images leads to a representation of size $(B, C, H, W) \coloneqq (B, 1024, 14, 14)$ . These representations contain both textural and spatial information. During meta-training, we treat each spatial position as one new class so that there are $14 \times 14 = 196$ new classes for each original class (e.g., wood), and each new class contains $B$ data points, each a 1024-dimensional vector. As a result, the model (DSVDD in our usage) takes a batch of vectors of size $(B, 1024)$ as input and assigns anomaly scores to each vector within the batch. When adding synthetic abnormal data (Eq. (6)), we use Gaussian-noise-corrupted input vectors rather than new-class data, incorporating the fact that local defects result in similar feature vectors instead of globally different textures. Specifically, we add Gaussian noise sampled from $\mathcal{N}(0, 0.01I)$ and set $\pi = 0.5$ in Eq. (6).
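A hedged sketch (ours; the helper name and defaults are assumptions) of how such a synthetic-anomaly batch can be formed from the $(B, 1024)$ feature vectors, with noise drawn from $\mathcal{N}(0, 0.01I)$, i.e., standard deviation 0.1:

```python
# Sketch (ours) of the synthetic-anomaly batches described above: a fraction
# (1 - pi) of each feature batch is replaced by Gaussian-noise-corrupted
# vectors and labeled as anomalous for the Meta Outlier Exposure loss.
import numpy as np

rng = np.random.default_rng(0)

def make_task_batch(features, pi=0.5, noise_std=0.1):
    """features: (B, D) normal feature vectors from one spatial position."""
    B = features.shape[0]
    n_anom = int(round((1 - pi) * B))
    idx = rng.choice(B, size=n_anom, replace=False)
    batch, labels = features.copy(), np.zeros(B, dtype=int)
    batch[idx] += rng.normal(0.0, noise_std, size=batch[idx].shape)  # corrupt
    labels[idx] = 1  # y = 1 marks the synthetic anomalies
    return batch, labels

feats = rng.standard_normal((30, 1024))  # e.g., one 1024-dim patch feature per image
batch, labels = make_task_batch(feats, pi=0.5)
print(batch.shape, labels.sum())  # -> (30, 1024) 15
```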
During testing, we batch each position $(h, w)$ of all images and detect anomalies at that position. Because the defects are local, and some position $(h_0, w_0)$ may contain no defect at all, we manually add synthetic noisy vectors into the tested batch, as in the training procedure, to ensure the images have a low anomaly score at $(h_0, w_0)$ . After getting anomaly scores, we remove the synthetic vector results, leading to scores of size $(B, 14, 14)$ , and then upscale the scores to the original image size by bilinear interpolation. We acknowledge that using CutPaste [45] to generate more realistic synthetic abnormal samples is another option. We leave the investigation to future work.
+
+ACR-DSVDD and Hyperparameter Search. Our model is a five-layer MLP with intermediate batch normalization layers and ReLU activations. The hidden sizes of the perceptrons are [512, 256, 128, 64, 32]. The center has dimension 32. The statistics of all batch normalization layers are computed on the fly on the training/test batches. We average the gradients of 32 randomly sampled tasks for each parameter update. Each task contains 30 normal feature vectors and 30 noise-corrupted feature vectors. We train the model with the Meta Outlier Exposure loss. We set the learning rate to 0.0003 and run 50 updates for each class. We search the hyperparameters on a test subset of the bottle class (half of the original test set) and apply the same hyperparameters to all classes afterward.
+
+# F.3 Implementation Details on Tabular Data
+
+ACR-NTL has the same model architecture as the baseline NTL, and ACR-DSVDD adds one additional batch normalization layer on top of the DSVDD baseline. Our algorithm is applicable to the existing backbone models without complex modifications.
+
+ACR-DSVDD. ACR is applied to the backbone model DSVDD [63]. The neural network of DSVDD is a four-layer MLP with intermediate batch normalization layers and ReLU activations.
The hidden sizes on the Anoshift dataset are [128, 128, 128, 32]. The hidden sizes on the Malware dataset are [64, 64, 64, 32]. One batch normalization layer is added on top of the network in the Anoshift experiment. The statistics of all batch normalization layers are computed on the fly on the training/test batches. We use Adam with a learning rate of 4e-4 on the Anoshift dataset and 1e-4 on the Malware dataset.
+
+ACR-NTL. ACR is applied to the backbone model NTL [56]. The shared encoder of NTL is a four-layer MLP with intermediate batch normalization layers and ReLU activations. The hidden sizes of the encoder are [128, 128, 128, 32]. The statistics of all batch normalization layers are computed on the fly on the training/test batches. We set the number of neural transformations to 19. Each neural transformation is parametrized by a three-layer MLP with a hidden size of 128 and ReLU activations. All networks are optimized jointly with Adam with a learning rate of 4e-4.
+
+# G Meta Outlier Exposure Avoids Trivial Solutions
+
+The benefit of the outlier exposure loss in meta-training is that the learning algorithm cannot simply learn a model on the average data distribution, i.e., without learning to adapt. This failure to adapt is a common problem in meta-learning. Our solution relies on using each training sample $\mathbf{x}_i$ in different contexts: depending on the sign of $y_{i,j}$ , data point $\mathbf{x}_i$ is considered normal (when drawn from $P_j$ ) or anomalous (when drawn from $\bar{P}_j$ ). This ambiguity prevents the model from learning an average model over the meta-dataset and forces it to adapt to individual distributions instead.
+
+For example, DSVDD with its original loss function may suffer from a trivial solution that maps any input data to the origin in the feature space and achieves the optimal zero loss [63]. This trivial solution is also possible in our proposed meta-training procedure.
However, Meta Outlier Exposure eliminates this trivial solution because mapping everything to the origin incurs an infinite loss on $\mathbf{A}_{\theta}$ . Similar reasoning also applies to the binary cross-entropy loss.
+
+# H Connections to Other Areas
+
+Our problem setup and assumptions share similarities with other research areas, but the differences are also pronounced.
+
+Connection to Batch Normalization-Based Test-time Adaptation (TTA). Many TTA works make batch-level predictions [10, 49, 53, 67, 76] and assume that the test-time data are corrupted but come from the same semantic classes as the training data; in contrast, zero-shot AD's test data can be drawn from a completely new class.
+
+Connection to Unsupervised Domain Adaptation (UDA). Although our approach uses unlabeled data like the UDA setting [38], UDA assumes the unlabeled data from the shifted domain are available during training and can be used to update the model parameters. Our method doesn't rely on the availability of novel data during training and doesn't require updating the model parameters at test time.
+
+Connection to Zero-shot Classification. Xian et al. [79] explains the nature of zero-shot classification and writes that "the crux of the matter for all zero-shot learning methods is to associate observed and non-observed classes through some form of auxiliary information which encodes visually distinguishing properties of objects." The auxiliary information demands extra human annotations like picture attributes. In contrast, our method assumes a batch of test data without human annotations. The test distribution information is automatically contained in the batch statistics.
+
+Connection to Meta-learning. Although we assume that a meta-training dataset is available, as in meta-learning, we don't require a support set for updating the model during either training or testing. The presence of a support set differentiates our method from meta-learning in many aspects.
First, for the most well-known technique in meta-learning (MAML), training requires second-order derivative information of the support-set loss function, which is computationally expensive and slows the optimization. Second, it is unclear how to select the support set size for adaptation; a large support set may be required to achieve good adaptation. For example, OC-MAML needs at least a 10-shot support set to perform on par with our method on CIFAR100-C. Third, the support set requires labeled data, which adds burden for practitioners. Fourth, the model parameter updates require additional maintenance and extra cost during testing.
+
+Connection to Contextual AD. Contextual AD considers a changing notion of normality based on context [25, 74]. In contextual AD, the training and testing data are from the same data-generating process, which involves (hidden or observed) contextual variables controlling the generation. This differs from our setup: we tackle the problem where the training and testing data come from different data-generating processes.
+
+Table 4: AUC (%) with standard deviation for anomaly detection on CIFAR100-C and Omniglot. As an ablation, rather than utilizing outlier exposure, we trained Zero-shot BN only on the normal data of each task.
+
| Loss | CIFAR100-C 1% | CIFAR100-C 5% | CIFAR100-C 10% | CIFAR100-C 20% | Omniglot 5% | Omniglot 10% | Omniglot 20% |
|---|---|---|---|---|---|---|---|
| One-class loss | 72.2±2.2 | 73.9±1.4 | 74.2±0.9 | 73.8±0.3 | 96.2±1.0 | 96.4±0.8 | 96.2±0.8 |
| (data-adapted) ResNet152 | 70.9±2.2 | 67.6±0.2 | 67.0±0.7 | 64.9±0.5 | 99.2±0.2 | 99.1±0.1 | 99.0±0.1 |
+
+Table 5: The effects of batch normalization for zero-shot AD. The first two columns show different combinations of batch normalization usage during training and testing. The third column indicates whether the corresponding combination works for zero-shot AD.
+
| BatchNorm (train) | BatchNorm (test) | Work? |
|---|---|---|
| Yes | Yes | Yes |
| Yes | No | No |
| No | Yes | No |
| No | No | No |
+
+Table 6: The effects of test-time batch size on the results of zero-shot AD. We report the test results in AUC when the contamination ratio is set to $5\%$ and $10\%$ . The studies are conducted on the Gaussian noise version of CIFAR-100C. For the extreme batch sizes, each batch contains exactly one anomaly.
+
| Batch size | 3 | 6 | 11 | 16 |
|---|---|---|---|---|
| One anomaly | 66.4±2.3 | 77.9±2.8 | 82.3±2.7 | 84.8±2.0 |

| Batch size | 20 | 40 | 60 | 80 | 100 |
|---|---|---|---|---|---|
| 5% | 83.7±1.9 | 85.3±1.1 | 85.6±1.2 | 85.9±0.8 | 85.6±0.7 |
| 10% | 84.5±1.5 | 86.1±0.8 | 85.7±0.7 | 85.8±0.5 | 85.8±0.6 |
+
+Table 7: The effects of the number of classes used in training on zero-shot AD. We report the test results in AUC when the contamination ratio is fixed at $10\%$ . The studies are conducted on the Omniglot dataset.
+
| #Training classes | 1 | 2 | 5 | 10 | 15 |
|---|---|---|---|---|---|
| AUROC | 59.0±0.6 | 71.8±0.6 | 72.5±0.3 | 72.2±1.0 | 75.3±0.4 |

| #Training classes | 20 | 40 | 80 | 160 | 320 | 640 | 1200 |
|---|---|---|---|---|---|---|---|
| AUROC | 79.0±1.0 | 90.5±0.5 | 95.3±0.2 | 97.6±0.2 | 98.1±0.2 | 98.4±0.1 | 99.1±0.2 |
+
+# I Additional Results
+
+# I.1 Ablation Study
+
+Training with Different Losses. We study the benefits of using meta outlier exposure in Eq. (5) and compare it to a) using the one-class classification loss $L[\mathbf{S}_{\theta}(\mathbf{x}_{\mathcal{B}})] = \frac{1}{B}\sum_{i\in \mathcal{B}}\mathbf{S}_{\theta}^{i}(\mathbf{x}_{\mathcal{B}})$ with $\mathbf{S}_{\theta}^{i}(\mathbf{x}_{\mathcal{B}}) = ||\phi_{\theta}^{i}(\mathbf{x}_{\mathcal{B}}) - \mathbf{c}||^{2}$ where $\phi_{\theta}$ is the feature map, and b) (data-adapted) ResNet152. The data-adapted ResNet152 first learns the features by performing a multi-class classification task on the meta-training set. Then a batch normalization layer is applied on top of the penultimate-layer representations for zero-shot anomaly detection. We train a 100-class classifier for CIFAR100C and Omniglot separately. Note that for Omniglot, we randomly sub-sample 100 classes from its 1400 training classes and train the classifier. From the results in Tab. 4 we can see that both ablations perform competitively with ACR on the simple Omniglot dataset, but much worse than ACR on the complex CIFAR100-C dataset. In conclusion, using meta outlier exposure in training is favorable.
+
+Training or Testing Without BatchNorm. We investigate whether training or testing without batch normalization works for zero-shot AD. To this end, we employ four different combinations of batch normalization usage during training and testing and check which combinations work and which don't. We trained the models with the same meta-training procedure as in Sec. 4.1.1 and tested on CIFAR100-C and Omniglot. We present the results in Tab. 5.
In the third column, "Yes" indicates that the AUROC metric is significantly larger than 0.5, and the model therefore learns meaningful zero-shot AD; "No" indicates that the AUROC is around 0.5, which means the predicted anomaly scores are random guesses and the model cannot be used for zero-shot AD. Tab. 5 shows that zero-shot AD works only when batch normalization is used in both training and testing. Otherwise, the meta-training procedure does not result in meaningful zero-shot AD representations.
+
+Moreover, for the DSVDD model, we can theoretically show that training without batch normalization will not work with meta outlier exposure: the optimal loss is unrelated to zero-shot AD and depends only on the mixture weight $\pi$ during training. Without loss of generality, suppose we have two training distributions $P_{1}, P_{2}$ . We learn a Deep SVDD model parameterized by $\theta$ and $c$ with the meta outlier exposure method,
+
+$$
+\begin{array}{l} l (\theta , c) \\ = \mathbb{E}_{x \sim P_{1}^{\pi}} \left[ (1 - y_{1}) (f_{\theta}(x) - c)^{2} + \frac{y_{1}}{(f_{\theta}(x) - c)^{2}} \right] + \mathbb{E}_{x \sim P_{2}^{\pi}} \left[ (1 - y_{2}) (f_{\theta}(x) - c)^{2} + \frac{y_{2}}{(f_{\theta}(x) - c)^{2}} \right] \\ = \mathbb{E}_{x_{1} \sim P_{1}, x_{2} \sim P_{2}} \left[ \pi \left(f_{\theta}\left(x_{1}\right) - c\right)^{2} + \frac{1 - \pi}{\left(f_{\theta}\left(x_{2}\right) - c\right)^{2}} + \pi \left(f_{\theta}\left(x_{2}\right) - c\right)^{2} + \frac{1 - \pi}{\left(f_{\theta}\left(x_{1}\right) - c\right)^{2}} \right] \tag{17} \\ = \sum_{i = 1}^{2} \mathbb{E}_{x_{i} \sim P_{i}} \left[ \pi \left(f_{\theta}\left(x_{i}\right) - c\right)^{2} + \frac{1 - \pi}{\left(f_{\theta}\left(x_{i}\right) - c\right)^{2}} \right] \\ \geq 4 \sqrt{\pi (1 - \pi)} \\ \end{array}
+$$
+
+where $0.5 < \pi < 1$ implies
the majority assumption, and equality holds when the model parameters ($\theta$ and $c$) are tuned such that $\left(f_{\theta}(x_i) - c\right)^2 = \sqrt{(1 - \pi)/\pi}$ for any $x_i$ . When the model is trivially optimized in this way, all data points lie on the surface of a hypersphere centered at $c$ with radius $((1 - \pi)/\pi)^{1/4}$ in the feature space. However, the optimal loss has nothing to do with distinguishing the different distributions' input data $x$ in the feature space, so it is unlikely to produce useful representations for zero-shot AD.
+
+On the other hand, if we apply batch normalization in the model $f_{\theta}$ , the optimal loss is no longer independent of the distributions. To see this, note that batch normalization shifts the input toward the origin. Thus $x_{1}$ in training task $P_{1}^{\pi}$ and $x_{2}$ in training task $P_{2}^{\pi}$ should have similar representations, as each forms the majority of its task. Similarly, $x_{2}$ in task $P_{1}^{\pi}$ and $x_{1}$ in task $P_{2}^{\pi}$ are minorities and are thus mapped far away from the origin in the feature space. Therefore, the symmetry in Eq. (17) breaks, and the trivial optimal loss above disappears.
+
+Ablation Study on Batch Sizes. To test the batch size effects, we add an ablation study on the Gaussian noise version of CIFAR-100C where we fix the anomaly ratio at $5\%$ or $10\%$ and try different batch sizes. The results are summarized in Tab. 6. It shows that larger batch sizes lead to more stable results. The performance is similar when the batch size is larger than or equal to 40. Even with a batch size of 20, our results are still better than the best-performing baseline.
+
+We also test extreme batch sizes of 3, 6, 11, and 16, where each batch contains exactly one anomaly.
+
+Ablation Study on Number of Training Classes.
To analyze the effect of the number of training distributions on the zero-shot AD performance, we conducted experiments on Omniglot where we varied the number of available meta-training classes over 1, 2, 5, 10, 15, 20, 40, 80, 160, 320, 640, and 1200. We separately trained ACR-DSVDD on each setup and tested the resulting models on the test set that has a $10\%$ ground-truth anomaly ratio. We repeated 5 runs of the experiment with random initialization and report the AUROC results in Tab. 7. It shows that 320 available classes are sufficient on this dataset to achieve decent zero-shot AD performance. The results also demonstrate that even with only one class in the meta-training set, thanks to the batch-norm adaptation, we can still get better-than-random zero-shot AD performance.
+
+Ablation Study on Other Normalization Techniques. Below, we report new experiments involving LayerNorm, InstanceNorm, and GroupNorm for zero-shot AD.
+
+We stress that, while these methods may have overall benefits in terms of enhancing performance, they do not work in isolation in our zero-shot AD setup. A crucial difference between these methods and batch normalization is that they treat each observation individually, rather than computing normalization statistics across a batch of observations. However, sharing information across the batch (and thereby implicitly learning distribution-level information) is crucial for our method to work.
+
+Our experiments (AUROC results in the table below) with DSVDD on the Omniglot dataset support this reasoning. Using these normalization layers in isolation yields random outcomes (AUROC $\approx 50$):
| LayerNorm | InstanceNorm | GroupNorm |
|---|---|---|
| 50.0±0.9 | 50.6±0.7 | 50.2±0.5 |
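This failure mode matches the reasoning above: per-sample normalization discards the batch-level statistics that identify the majority. A small numerical sketch (ours, not from the paper) contrasts batch-level and LayerNorm-style per-sample normalization on a shifted test task containing a single anomaly:

```python
# Sketch (ours): distance-to-origin scores after batch-level vs. per-sample
# normalization on one unseen "task" whose mean is unknown at test time.
import numpy as np

rng = np.random.default_rng(0)
D = 16
task_mean = rng.normal(0.0, 10.0, size=D)                # unknown test-task mean
normal = task_mean + rng.standard_normal((29, D))        # majority: normal samples
anomaly = task_mean + 4.0 + rng.standard_normal((1, D))  # one shifted sample
batch = np.vstack([normal, anomaly])

def scores(z):
    # Squared distance to the origin in the normalized space.
    return (z ** 2).sum(axis=1)

# Batch-level normalization: statistics are shared across the batch.
bn = (batch - batch.mean(axis=0)) / (batch.std(axis=0) + 1e-5)
# Per-sample normalization (LayerNorm-style): statistics per observation.
ln = (batch - batch.mean(axis=1, keepdims=True)) / (batch.std(axis=1, keepdims=True) + 1e-5)

print(int(np.argmax(scores(bn))))         # 29: the anomaly stands out
print(round(float(scores(ln).std()), 2))  # 0.0: per-sample scores are uninformative
```

Batch-level statistics re-center the majority near the origin regardless of the unknown task mean, so the anomaly receives the highest score; per-sample normalization gives every sample nearly the same score.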
+ +We also added a version of the experiment where we combined these methods with batch normalization in the final layer. The results dramatically improve in this case: + +
| BatchNorm (BN) | LayerNorm + BN | InstanceNorm + BN | GroupNorm + BN |
|---|---|---|---|
| 99.1±0.2 | 98.8±0.1 | 98.8±0.2 | 98.2±0.2 |
+ +Experimental details: We use DSVDD as the anomaly detector and experiment on the Omniglot dataset. Each nonlinear layer of the feature extractor for DSVDD is followed by the respective normalization layer. We apply the same training protocol as Tab. 9 in the paper. For GroupNorm, we separate the channels into two groups wherever we apply group normalization. + +Effects of BatchNorm (BN) Layer Position. We conducted additional experiments on two visual anomaly detection tasks – anomaly segmentation on the MVTec-AD dataset and object-level AD on CIFAR100C. We used the same DSVDD model architectures as used in Tables 1 and 2 as the backbone model, except that we switched off BN in all but one layer. For anomaly segmentation, there are five possible BN layer positions; and there are four positions for the object-level AD model. We switched off the BN layers in all but one position and then re-trained and tested the model with the same protocol used in our main paper (For CIFAR100C, we tested the model with the test data anomaly ratio of $10\%$ ). We iterate this procedure across all available BN layer positions. We repeat every experiment with different random seeds five times and report the mean AUROC and standard deviation. The results are summarized in the tables below, where a smaller value of the BN position corresponds to earlier layers (close to the input), and a larger value corresponds to later layers close to the output. The final column is copied from our results in Tables 1 and 2 where BN layers are on all available positions. For both MVTec-AD and CIFAR100C, we average the performance across all test classes. + +Results on the two tasks have opposite trends regarding the effects of BN layer positions. Specifically, for anomaly segmentation on MVTec-AD, earlier BN layers are more effective, while for AD on CIFAR100C, later BN layers are more effective. 
This observation can be explained by the fact that anomaly segmentation is more sensitive to low-level features, while object-level AD is more sensitive to global feature representations. In addition, compared to the results in Tables 1 and 2 (copied to the last column in the table below), our results suggest that using BN layers at multiple positions does help re-calibrate the data batches of different distributions from low-level features (early layers) to high-level features (late layers) and shows performance improvement over a single BN layer. + +MVTec-AD + +
| BN Position | 1 | 2 | 3 | 4 | 5 | (1,2,3,4,5) |
|---|---|---|---|---|---|---|
| Pixel-level | 80.8±1.9 | 69.6±1.4 | 73.9±0.9 | 63.6±1.6 | 60.9±0.8 | 92.5±0.2 |
| Image-level | 74.7±0.9 | 59.2±1.6 | 63.6±1.3 | 65.5±1.2 | 65.4±1.3 | 85.8±0.6 |
+ +CIFAR100C + +
| BN Position | 1 | 2 | 3 | 4 | (1,2,3,4) |
|---|---|---|---|---|---|
| AUROC | 61.4±0.5 | 61.0±0.9 | 68.2±0.9 | 68.9±1.1 | 85.9±0.4 |
![](images/1d12296832618dcfdec121e84cff05742c594e2aea14a62c743554daf487f0e2.jpg)
Figure 3: 2D visualization (after PCA) of the adaptively centered representations for two test tasks in the Omniglot dataset. The same learned DSVDD model, adapted with our proposed method, maps samples from the majority class (class 1, left; class 2, right) to the same center in the embedding space in both tasks.

![](images/00229f23502e950ebf9f7a01dd3540ae1d76c91a44cd35d07ac93f4be413dbd1.jpg)

Robustness of the mixing hyperparameter $\pi$ in Eq. (6). We conduct the following experiments with varying $\pi$. The setup is the same as Table 1 on CIFAR100C with a testing anomaly ratio of 0.1. All tested values of $\pi$ yield over $84\%$ AUROC.

CIFAR100C
| $\pi$ | 0.99 | 0.95 | 0.9 | 0.8 | 0.6 |
|---|---|---|---|---|---|
| AUROC | 85.8±0.5 | 85.4±0.5 | 84.1±0.4 | 85.9±0.4 | 84.4±0.6 |
# I.2 Visualization of ACR

We provide a visualization of the learned representations from DSVDD on the Omniglot dataset as qualitative evidence in Fig. 3. We observe that even though the normal and abnormal data classes flip between the two plots, the model learns to center the samples from the majority class and map the samples from the minority class away from the center in the embedding space. In conclusion, ACR is an easy-to-use zero-shot AD method and achieves superior zero-shot AD results on different types of images. The performance of ACR is also robust to the test anomaly ratio.

# I.3 Additional Results on CIFAR100-C

We test all methods on all corruption types of CIFAR100-C. The results are presented in Tab. 8.

# I.4 Additional Results on Non-natural Images

Datasets. We further evaluate the methods on two other non-natural datasets of hand-written characters: MNIST and Omniglot. MNIST uses the same split and evaluation protocol as OrganA. On Omniglot, we take the first 1200 classes to form the meta-training set and use the remaining 423 unseen classes for testing.

Results. We present the results in Tab. 9. Our approach significantly outperforms all the other baselines by a large margin on both datasets.

# I.5 Class-wise Results on MVTec-AD

We present the class-wise results in Tab. 10 for finer comparisons with other methods on the MVTec-AD benchmark.

In addition, we also implement the synthetic anomalies using images from different distributions during training. The result is worse than that of the model using Gaussian noise corruption as example anomalies, but still better than existing works in anomaly segmentation. We summarize the results in Tab. 11, which suggests that using images from different distributions as example anomalies during training is also helpful for anomaly segmentation on MVTec-AD.

# I.6 Additional Results on Malware

Dataset.
Malware [34] is a dataset of malicious and benign computer programs, collected from 11/2010 to 07/2014. Malware attacks are designed adversarially, thus leading to shifts in both normal and abnormal data. We adopt the data reader from Li et al. [43]. We follow the preprocessing of [34] and convert the real-valued probabilities $p$ of being malware to binary labels (labeled one if $p > 0.6$ and zero if $p < 0.4$). The samples with probabilities between 0.4 and 0.6 are discarded. The model is trained on normal samples collected from 01/2011 to 12/2013, validated on normal and abnormal samples from 11/2010 to 12/2010, and tested on normal and abnormal samples from 01/2014 to 07/2014 (the anomaly ratios vary between $1\%$ and $20\%$).

Results. We report the results on Malware in Tab. 12. ACR-NTL achieves the best results under all anomaly ratios. All baselines except ICL perform worse than random guessing, meaning that the malware successfully fools most baselines; this testifies to the adversarial-upgrade explanation in the main paper.

Table 8: AUC (%) with standard deviation for anomaly detection on CIFAR100-C [29].
| Noise Type | Method | 1% | 5% | 10% | 20% |
|---|---|---|---|---|---|
| gaussian noise | ACR-DSVDD | 87.7±1.4 | 86.3±0.9 | 85.9±0.4 | 85.6±0.4 |
| | ACR-BCE | 84.3±2.2 | 86.0±0.3 | 86.0±0.2 | 85.7±0.4 |
| | ResNet152-I | 75.6±2.3 | 73.2±1.3 | 73.2±0.8 | 69.9±0.6 |
| | ResNet152-II | 62.5±3.1 | 61.8±1.7 | 61.2±0.6 | 60.2±0.4 |
| | OC-MAML (1-shot) | 53.0±3.6 | 54.1±1.9 | 55.8±0.6 | 57.1±1.0 |
| | CLIP-AD | 82.3±1.1 | 82.6±0.9 | 82.3±0.9 | 82.6±0.1 |
| shot noise | ACR-DSVDD | 85.5±1.6 | 86.5±0.2 | 87.3±0.6 | 86.4±0.4 |
| | ACR-BCE | 87.1±2.4 | 86.3±0.6 | 86.8±0.5 | 86.4±0.1 |
| | ResNet152-I | 76.9±2.3 | 75.7±0.7 | 74.3±0.6 | 71.9±0.6 |
| | ResNet152-II | 59.7±2.0 | 60.9±1.4 | 61.0±0.6 | 60.1±0.6 |
| | OC-MAML (1-shot) | 53.8±4.7 | 52.8±1.1 | 53.6±1.0 | 53.8±1.3 |
| | CLIP-AD | 83.0±1.6 | 84.1±0.3 | 83.9±0.5 | 83.3±0.3 |
| impulse noise | ACR-DSVDD | 80.5±3.7 | 81.5±0.5 | 80.7±0.7 | 79.8±0.2 |
| | ACR-BCE | 81.7±1.0 | 81.0±0.5 | 80.8±0.7 | 79.5±0.3 |
| | ResNet152-I | 74.1±1.4 | 73.1±1.0 | 72.2±0.4 | 71.9±0.3 |
| | ResNet152-II | 64.3±2.7 | 63.0±1.2 | 62.2±0.8 | 61.2±0.6 |
| | OC-MAML (1-shot) | 53.6±2.5 | 54.8±1.6 | 53.6±1.1 | 53.8±0.9 |
| | CLIP-AD | 81.5±2.0 | 82.7±0.4 | 82.3±0.5 | 82.2±0.2 |
| speckle noise | ACR-DSVDD | 86.5±2.0 | 85.8±0.8 | 86.0±0.4 | 85.1±0.2 |
| | ACR-BCE | 85.9±1.7 | 86.4±0.4 | 85.7±0.6 | 85.4±0.4 |
| | ResNet152-I | 75.8±2.8 | 75.8±0.4 | 75.1±0.4 | 72.9±0.5 |
| | ResNet152-II | 61.8±2.8 | 61.0±1.0 | 61.0±0.9 | 59.8±0.3 |
| | OC-MAML (1-shot) | 52.2±2.7 | 52.8±1.2 | 53.5±1.2 | 53.7±0.4 |
| | CLIP-AD | 84.6±1.6 | 83.7±0.4 | 84.1±0.4 | 84.2±0.3 |
| gaussian blur | ACR-DSVDD | 88.5±1.1 | 88.5±0.7 | 88.7±0.4 | 88.6±0.3 |
| | ACR-BCE | 85.6±1.3 | 85.0±0.6 | 85.0±0.9 | 84.7±0.5 |
| | ResNet152-I | 85.2±1.5 | 83.7±1.0 | 82.9±0.7 | 80.9±0.3 |
| | ResNet152-II | 64.9±1.5 | 65.3±1.2 | 64.0±0.9 | 62.7±0.4 |
| | OC-MAML (1-shot) | 55.6±3.6 | 56.6±0.6 | 56.8±1.1 | 57.6±0.6 |
| | CLIP-AD | 91.9±0.8 | 92.7±0.5 | 92.1±0.5 | 92.3±0.2 |
| defocus blur | ACR-DSVDD | 89.7±1.8 | 89.5±0.8 | 89.1±0.3 | 89.2±0.3 |
| | ACR-BCE | 86.5±1.3 | 86.5±0.6 | 86.3±0.3 | 85.9±0.4 |
| | ResNet152-I | 85.2±1.5 | 83.7±1.0 | 82.9±0.7 | 80.9±0.3 |
| | ResNet152-II | 64.9±1.5 | 65.3±1.2 | 64.0±0.9 | 62.7±0.4 |
| | OC-MAML (1-shot) | 53.5±2.5 | 51.7±1.7 | 54.0±1.8 | 54.7±0.7 |
| | CLIP-AD | 93.1±1.4 | 92.9±0.3 | 92.8±0.3 | 92.8±0.2 |
| glass blur | ACR-DSVDD | 87.0±2.1 | 87.9±0.4 | 87.7±0.4 | 87.6±0.2 |
| | ACR-BCE | 85.4±1.0 | 86.1±0.4 | 86.4±0.4 | 86.1±0.3 |
| | ResNet152-I | 80.3±2.5 | 78.7±0.8 | 78.0±0.6 | 75.6±0.4 |
| | ResNet152-II | 63.9±2.2 | 63.0±1.7 | 63.6±0.5 | 62.2±0.4 |
| | OC-MAML (1-shot) | 52.8±1.9 | 53.1±1.7 | 53.9±0.9 | 53.7±1.4 |
| | CLIP-AD | 85.4±0.5 | 85.0±1.1 | 84.2±0.7 | 84.4±0.3 |
| motion blur | ACR-DSVDD | 89.2±0.4 | 89.6±0.8 | 89.1±0.5 | 88.6±0.5 |
| | ACR-BCE | 86.3±1.9 | 85.3±1.0 | 85.7±0.2 | 84.9±0.2 |
| | ResNet152-I | 84.3±1.3 | 83.4±1.3 | 82.0±0.5 | 80.4±0.3 |
| | ResNet152-II | 66.6±3.1 | 64.8±1.2 | 63.4±0.6 | 62.4±0.3 |
| | OC-MAML (1-shot) | 50.5±3.1 | 52.3±1.6 | 53.1±0.9 | 53.6±0.7 |
| | CLIP-AD | 91.8±1.4 | 92.9±0.4 | 92.7±0.3 | 92.8±0.3 |
| zoom blur | ACR-DSVDD | 90.3±1.7 | 89.6±0.7 | 89.8±0.4 | 89.4±0.3 |
| | ACR-BCE | 87.5±1.8 | 86.5±1.0 | 86.4±0.2 | 86.4±0.3 |
| | ResNet152-I | 86.9±1.3 | 87.2±0.3 | 86.3±0.3 | 84.6±0.2 |
| | ResNet152-II | 65.2±1.1 | 65.7±0.7 | 66.1±0.3 | 64.2±0.4 |
| | OC-MAML (1-shot) | 50.6±3.2 | 53.8±0.8 | 53.7±1.4 | 54.2±0.4 |
| | CLIP-AD | 94.2±1.4 | 94.4±0.3 | 94.3±0.3 | 93.9±0.3 |
| snow | ACR-DSVDD | 87.7±1.2 | 87.7±1.0 | 87.6±0.4 | 87.4±0.3 |
| | ACR-BCE | 84.4±2.6 | 85.5±0.8 | 85.5±0.6 | 84.4±0.0 |
| | ResNet152-I | 85.8±1.4 | 84.8±0.6 | 83.7±0.8 | 81.9±0.3 |
| | ResNet152-II | 67.1±1.9 | 65.6±0.9 | 64.5±0.4 | 63.3±0.8 |
| | OC-MAML (1-shot) | 56.7±4.5 | 54.5±1.8 | 56.8±0.6 | 57.3±0.2 |
| | CLIP-AD | 91.7±0.8 | 92.9±0.4 | 93.3±0.2 | 93.2±0.2 |
| fog blur | ACR-DSVDD | 86.2±1.8 | 85.2±0.6 | 85.4±0.9 | 85.0±0.2 |
| | ACR-BCE | 78.8±2.7 | 77.7±0.5 | 77.3±0.7 | 77.2±0.6 |
| | ResNet152-I | 76.4±1.8 | 76.9±0.6 | 74.8±1.0 | 73.0±0.9 |
| | ResNet152-II | 64.5±2.1 | 62.9±0.8 | 62.5±0.4 | 61.0±0.5 |
| | OC-MAML (1-shot) | 51.9±3.6 | 52.9±0.9 | 53.4±0.6 | 53.7±0.2 |
| | CLIP-AD | 91.9±0.8 | 92.3±0.5 | 92.2±0.4 | 92.3±0.3 |
| frost | ACR-DSVDD | 88.2±1.5 | 88.0±0.9 | 87.4±0.6 | 87.2±0.3 |
| | ACR-BCE | 83.2±1.4 | 84.1±1.2 | 84.6±0.6 | 83.7±0.4 |
| | ResNet152-I | 85.9±1.6 | 85.5±0.5 | 83.8±0.8 | 81.4±0.5 |
| | ResNet152-II | 63.0±1.0 | 63.2±0.5 | 62.7±1.3 | 61.7±0.3 |
| | OC-MAML (1-shot) | 52.8±1.3 | 52.4±2.0 | 53.6±0.7 | 53.2±1.1 |
| | CLIP-AD | 92.9±0.6 | 93.1±0.2 | 93.6±0.3 | 93.2±0.2 |
| brightness | ACR-DSVDD | 90.0±1.5 | 89.5±0.9 | 89.6±0.4 | 89.9±0.2 |
| | ACR-BCE | 86.7±1.3 | 87.8±0.7 | 87.1±0.8 | 87.2±0.4 |
| | ResNet152-I | 90.7±0.9 | 89.8±0.5 | 89.7±0.3 | 88.1±0.3 |
| | ResNet152-II | 67.6±2.1 | 69.8±0.4 | 68.2±1.0 | 67.0±0.5 |
| | OC-MAML (1-shot) | 53.6±1.1 | 56.8±1.5 | 56.2±0.7 | 56.8±0.5 |
| | CLIP-AD | 94.6±0.4 | 95.6±0.3 | 95.4±0.3 | 95.3±0.2 |
| spatter | ACR-DSVDD | 88.1±1.5 | 89.2±0.6 | 89.0±0.6 | 88.7±0.1 |
| | ACR-BCE | 86.2±2.3 | 87.7±0.3 | 87.2±0.6 | 87.3±0.3 |
| | ResNet152-I | 76.1±1.6 | 77.0±0.8 | 75.2±0.3 | 73.5±0.2 |
| | ResNet152-II | 61.3±0.9 | 61.3±1.2 | 60.2±0.5 | 59.3±0.5 |
| | OC-MAML (1-shot) | 54.6±3.7 | 54.0±0.3 | 53.1±1.2 | 54.1±1.0 |
| | CLIP-AD | 89.3±1.8 | 88.9±0.5 | 88.3±0.4 | 88.8±0.2 |
| elastic transform | ACR-DSVDD | 90.8±1.9 | 89.3±0.7 | 90.0±0.4 | 89.3±0.3 |
| | ACR-BCE | 87.6±1.0 | 86.7±0.8 | 87.4±0.6 | 87.2±0.4 |
| | ResNet152-I | 82.4±2.4 | 80.9±0.4 | 80.0±0.9 | 78.4±0.2 |
| | ResNet152-II | 65.6±2.3 | 65.2±0.6 | 63.9±0.7 | 62.0±0.3 |
| | OC-MAML (1-shot) | 52.5±3.9 | 54.3±1.2 | 54.1±1.2 | 54.7±0.8 |
| | CLIP-AD | 91.7±0.5 | 90.0±0.3 | 89.4±0.5 | 89.4±0.3 |
| pixelate | ACR-DSVDD | 91.7±0.5 | 91.1±0.6 | 90.8±0.6 | 90.7±0.2 |
| | ACR-BCE | 89.6±1.9 | 89.9±0.6 | 89.7±0.1 | 89.8±0.3 |
| | ResNet152-I | 82.5±1.6 | 83.2±1.3 | 82.5±0.7 | 80.1±0.3 |
| | ResNet152-II | 66.4±1.5 | 65.6±0.6 | 64.9±0.6 | 63.8±0.3 |
| | OC-MAML (1-shot) | 56.4±3.8 | 55.8±0.9 | 56.4±0.7 | 57.0±0.9 |
| | CLIP-AD | 89.8±1.9 | 87.7±0.7 | 88.3±0.3 | 88.5±0.3 |
Table 9: AUC (%) with standard deviation for anomaly detection on non-natural images: Omniglot, MNIST, and OrganA. ACR with both backbone models outperforms all baselines on all datasets. In comparison, CLIP-AD performs much worse on non-natural images.
| Method | MNIST 1% | MNIST 5% | MNIST 10% | Omniglot 5% | Omniglot 10% | Omniglot 20% |
|---|---|---|---|---|---|---|
| ADIB [14] | 50.4±2.0 | 49.4±1.7 | 49.4±2.0 | 50.8±1.7 | 49.5±0.6 | 49.7±0.4 |
| ResNet152-I [27] | 87.2±1.3 | 84.2±0.2 | 80.9±0.2 | 96.4±0.4 | 95.5±0.3 | 94.3±0.2 |
| ResNet152-II [27] | 80.0±1.9 | 78.4±1.5 | 74.9±0.3 | 88.1±0.8 | 86.7±0.5 | 84.4±0.6 |
| OC-MAML [22] | 83.7±3.5 | 86.0±2.3 | 86.4±2.8 | 98.6±0.3 | 98.4±0.2 | 98.5±0.1 |
| CLIP-AD [51] | 53.9±1.4 | 53.7±0.9 | 53.9±0.8 | N/A | N/A | N/A |
| ACR-DSVDD | 91.9±0.8 | 90.4±0.2 | 88.8±0.2 | 99.1±0.2 | 99.1±0.2 | 99.2±0.0 |
| ACR-BCE | 88.7±0.6 | 87.8±0.4 | 86.5±0.3 | 98.5±0.2 | 98.9±0.1 | 99.1±0.1 |
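The Omniglot meta-split behind these results (first 1200 classes for meta-training, the remaining 423 unseen classes for testing) is a simple class-level partition; `split_omniglot_classes` is an illustrative helper, not the released data-loading code:

```python
def split_omniglot_classes(all_classes, n_meta_train=1200):
    """Partition class labels into a meta-training set and a held-out,
    unseen test set, as described for the Omniglot evaluation."""
    return all_classes[:n_meta_train], all_classes[n_meta_train:]

# Omniglot has 1623 character classes in total: 1200 + 423.
meta_train, meta_test = split_omniglot_classes(list(range(1623)))
# len(meta_train) == 1200, len(meta_test) == 423
```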
Table 10: ACR-DSVDD's pixel-level (segmentation) and image-level (classification) AUROCs (%) on MVTec-AD.
| Class | Pixel-level | Image-level |
|---|---|---|
| Bottle | 95.8±0.5 | 99.5±0.2 |
| Cable | 87.1±1.1 | 72.2±1.9 |
| Capsule | 95.2±0.9 | 78.1±1.2 |
| Carpet | 97.8±0.4 | 99.8±0.2 |
| Grid | 90.4±1.4 | 83.8±3.0 |
| Hazelnut | 92.3±0.7 | 79.2±0.9 |
| Leather | 98.6±0.1 | 100.0±0.0 |
| Metal-nut | 79.5±2.5 | 75.6±5.4 |
| Pill | 93.0±1.3 | 72.2±3.2 |
| Screw | 86.7±0.9 | 48.5±3.1 |
| Tile | 90.3±0.9 | 98.4±0.3 |
| Toothbrush | 98.2±0.1 | 97.0±0.9 |
| Transistor | 97.2±0.1 | 93.1±1.1 |
| Wood | 89.2±1.1 | 98.2±0.8 |
| Zipper | 95.8±0.9 | 90.7±0.9 |
| Average | 92.5±0.2 | 85.8±0.6 |
Table 11: ACR-DSVDD's pixel-level (segmentation) and image-level (classification) AUROCs (%) on MVTec-AD. The model uses images from other classes as synthetic anomalies during training.
| Class | Pixel-level | Image-level |
|---|---|---|
| Bottle | 94.5 | 98.6 |
| Cable | 88.1 | 64.5 |
| Capsule | 90.1 | 70.8 |
| Carpet | 97.5 | 99.5 |
| Grid | 74.8 | 97.9 |
| Hazelnut | 84.3 | 61.9 |
| Leather | 97.5 | 99.1 |
| Metal-nut | 67.5 | 54.2 |
| Pill | 89.0 | 66.8 |
| Screw | 76.6 | 53.2 |
| Tile | 90.0 | 97.6 |
| Toothbrush | 93.5 | 80.8 |
| Transistor | 95.2 | 91.4 |
| Wood | 88.1 | 97.2 |
| Zipper | 78.6 | 79.7 |
| Average | 87.0 | 78.8 |
Table 12: AUC (%) with standard deviation for anomaly detection on Malware [34]. ACR-NTL achieves the best results on various anomaly ratios.
| Method | 1% | 5% | 10% | 20% |
|---|---|---|---|---|
| OC-SVM | 19.5±5.6 | 20.5±1.4 | 20.3±0.9 | 20.3±0.8 |
| IForest | 22.8±2.9 | 22.9±1.2 | 23.3±0.6 | 23.4±0.8 |
| LOF | 22.3±4.9 | 23.2±1.8 | 23.3±1.3 | 23.2±0.4 |
| KNN | 21.6±6.3 | 22.5±1.6 | 22.7±0.9 | 22.6±0.9 |
| DSVDD | 25.4±3.3 | 27.4±1.7 | 28.9±0.9 | 28.3±0.8 |
| AE | 48.8±2.4 | 49.1±1.2 | 49.4±0.6 | 49.3±0.5 |
| LUNAR | 23.1±4.5 | 23.8±1.2 | 24.1±0.7 | 24.2±0.6 |
| ICL | 83.5±1.9 | 81.0±1.0 | 82.9±0.8 | 83.1±0.9 |
| NTL | 25.9±4.8 | 25.4±1.3 | 24.5±1.3 | 25.0±0.8 |
| ACR-DSVDD | 73.1±2.8 | 69.5±3.3 | 69.4±3.3 | 66.4±4.0 |
| ACR-NTL | 85.0±1.3 | 84.5±0.8 | 85.1±1.2 | 84.0±0.8 |
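The probability-thresholding rule described in the Dataset paragraph above (label one if $p > 0.6$, zero if $p < 0.4$, discard the ambiguous middle) can be sketched as follows; `binarize_malware_labels` is an illustrative helper, not the released preprocessing code:

```python
def binarize_malware_labels(probs, lo=0.4, hi=0.6):
    """Map real-valued malware probabilities to binary labels.

    Samples with lo <= p <= hi are discarded, mirroring the
    Malware [34] preprocessing described above.
    """
    kept = [(i, 1 if p > hi else 0) for i, p in enumerate(probs) if p > hi or p < lo]
    indices = [i for i, _ in kept]
    labels = [y for _, y in kept]
    return indices, labels

idx, y = binarize_malware_labels([0.95, 0.5, 0.1, 0.62, 0.39])
# samples 0, 2, 3, 4 are kept; sample 1 (p = 0.5) is discarded
```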
# Zero-shot causal learning

Hamed Nilforoshan\*1 Michael Moor\*1 Yusuf Roohani2 Yining Chen1 Anja Šurina3 Michihiro Yasunaga1 Sara Oblak4 Jure Leskovec1

$^{1}$ Department of Computer Science, Stanford University
$^{2}$ Department of Biomedical Data Science, Stanford University
$^{3}$ School of Computer and Communication Sciences, EPFL
$^{4}$ Department of Computer Science, University of Ljubljana

Correspondence to: hamedn@cs.stanford.edu, mdmoor@cs.stanford.edu, jure@cs.stanford.edu

# Abstract

Predicting how different interventions will causally affect a specific individual is important in a variety of domains such as personalized medicine, public policy, and online marketing. There are a large number of methods to predict the effect of an existing intervention based on historical data from individuals who received it. However, in many settings it is important to predict the effects of novel interventions (e.g., a newly invented drug), which these methods do not address. Here, we consider zero-shot causal learning: predicting the personalized effects of a novel intervention.
We propose CaML, a causal meta-learning framework which formulates the personalized prediction of each intervention's effect as a task. CaML trains a single meta-model across thousands of tasks, each constructed by sampling an intervention, its recipients, and its nonrecipients. By leveraging both intervention information (e.g., a drug's attributes) and individual features (e.g., a patient's history), CaML is able to predict the personalized effects of novel interventions that do not exist at the time of training. Experimental results on real-world datasets in large-scale medical claims and cell-line perturbations demonstrate the effectiveness of our approach. Most strikingly, CaML's zero-shot predictions outperform even strong baselines trained directly on data from the test interventions.

# 1 Introduction

Personalized predictions about how an intervention will causally affect a specific individual are important across many high-impact applications in the physical, life, and social sciences. For instance, consider a doctor deciding whether or not to prescribe a drug to a patient. Depending on the patient, the same drug could either (a) cure the disease, (b) have no effect, or (c) elicit a life-threatening adverse reaction. Predicting which effect the drug will have for each patient could revolutionize healthcare by enabling personalized treatments for each patient.

The causal inference literature formalizes this problem as conditional average treatment effect (CATE) estimation, in which the goal is to predict the effect of an intervention, conditioned on patient characteristics $(X)$. When natural experiment data is available, consisting of individuals who already did and did not receive an intervention, a variety of CATE estimators exist to accomplish this task [1, 3, 16, 23, 30, 34, 44, 55, 36, 70]. These methods can then predict the effect of an existing intervention $(W)$ on a new individual $(X^{\prime})$.
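For intuition, a T-learner [44], one of the simplest such estimators, fits separate outcome models to the treated and control groups and takes the difference of their predictions. A minimal NumPy sketch on synthetic data (the linear model and data-generating process here are illustrative assumptions, not the paper's setup):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic natural experiment: features X, binary treatment W, outcome Y,
# with a heterogeneous true effect tau(x) = 2 * x[:, 0].
n = 2000
X = rng.normal(size=(n, 2))
W = rng.integers(0, 2, size=n)
Y = X[:, 0] + W * (2 * X[:, 0]) + 0.1 * rng.normal(size=n)

def fit_linear(X, y):
    """Least-squares fit with an intercept column; returns a predict function."""
    A = np.column_stack([np.ones(len(X)), X])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return lambda Xq: np.column_stack([np.ones(len(Xq)), Xq]) @ coef

# T-learner: separate outcome models for treated and control units.
mu1 = fit_linear(X[W == 1], Y[W == 1])
mu0 = fit_linear(X[W == 0], Y[W == 0])
cate = mu1(X) - mu0(X)   # per-individual estimate of tau(x)
```

This only works because the treated group for this intervention was observed; the zero-shot setting below removes exactly that assumption.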
However, in many real-world applications natural experiment data is entirely unavailable, and yet CATE estimation is critical. For instance, when new drugs are discovered, or new government policies are passed, it is important to know the effect of these novel interventions on individuals and subgroups in advance, i.e., before anybody is treated. There is thus a need for methods that can predict the effect of a novel intervention $(W^{\prime})$ on a new individual $(X^{\prime})$ in a zero-shot fashion, i.e., without relying on any historical data from individuals who received the intervention.

Generalizing to novel interventions is especially challenging because it requires generalizing across two dimensions simultaneously: to new interventions and to new individuals. This entails efficiently "aligning" newly observed interventions to the ones previously observed in the training data.

Present work. Here, we first formulate the zero-shot CATE estimation problem. We then propose CaML (Causal Meta-learning), a general framework for training a single meta-model to estimate CATE across many interventions, including novel interventions that did not exist at the time of model training (Figure 1). Our key insight is to frame CATE estimation for each intervention as a separate meta-learning task. For each task observed during training, we sample a retrospective natural experiment consisting of both (a) individuals who did receive the intervention and (b) individuals who did not. This natural experiment data is used to estimate the effect of the intervention for each individual (using any off-the-shelf CATE estimator), which serves as the training target for the task.

In order to achieve zero-shot generalization to new interventions, we include information $(W)$ about the intervention (e.g., a drug's attributes) in the task.
We then train a single meta-model which fuses intervention information with individual-level features $(X)$ to predict the intervention's effect. Our approach allows us to predict the causal effect of novel interventions, i.e., interventions without sample-level training data, such as a newly discovered drug (Figure 1). We refer to this capability as zero-shot causal learning.

In our experiments, we evaluate our method on two real-world datasets, breaking convention with the CATE methods literature, which typically relies on synthetic and semi-synthetic datasets. Our experiments show that CaML is both scalable and effective, including the application to a large-scale medical dataset featuring tens of millions of patients. Most strikingly, CaML's zero-shot performance exceeds even strong baselines that were trained directly on data from the test interventions. We further discover that CaML is capable of zero-shot generalization even under challenging conditions: when trained only on single interventions, at inference time it can accurately predict the effect of combinations of novel interventions. Finally, we explain these findings by proving a zero-shot generalization bound.

# 2 Related work

We discuss recent work which is most closely related to zero-shot causal learning, and provide an extended discussion of other related work in Appendix B. Most CATE estimators do not address novel interventions, requiring that all considered interventions be observed during training. A notable exception is recent methods which estimate CATE for an intervention using structured information about its attributes [25, 35]. In principle, these methods can also be used for zero-shot predictions. These methods estimate CATE directly from the raw triplets $(W, X, Y)$, without considering natural experiments, by tailoring specific existing CATE estimators (the S-learner [44] and the Robinson decomposition [55], respectively) to structured treatments.
The main drawback of these approaches is that they are inflexible, i.e., they are restricted to a single estimator and are unable to take advantage of recent advances in the broader CATE estimation literature (e.g., recently developed binary treatment estimators [16, 21, 40]). This is a limitation because any single CATE estimator can be unstable across different settings [15]. Notably, the estimators which these methods build on have already been shown to result in high bias in many domains [44, 37, 10, 16]. Likewise, we find that these methods struggle with zero-shot predictions (Section 6). CaML's key difference from prior work is that we construct a separate task for each training intervention by synthesizing natural experiments. This allows us to (a) flexibly wrap any existing CATE estimator to obtain labels for each task, and thus take advantage of the most recent CATE estimation methods, and (b) leverage meta-learning, which requires task-structured data. Consequently, CaML is able to achieve strong zero-shot performance (Section 6).

# 3 Background: single-intervention CATE estimation

Each task in the CaML framework consists of estimating conditional average treatment effects (CATEs) for a single binary treatment. In this section, we first provide background on CATE

![](images/6fe99a5becaec144602b0237e689236be5fe2589814dd22c0b73e1f6cd601425.jpg)
Figure 1: Overview of the zero-shot causal learning problem. Each individual has features $(X)$, an intervention with features $(W)$, and an outcome $(Y)$. Lightning bolts represent interventions (e.g., drugs). The personalized effect of an intervention $(\tau)$ is always unobserved. The goal is to predict $\tau$ for a novel intervention $(W^{\prime})$ and individual $(X^{\prime})$ that did not exist during training.
![](images/e617ba4a2827dcd6f51cf4af309588b2513ce70fc9910558ac45f88661ebf88a.jpg)

![](images/b95c4861ebd50393a71c36ad64e1bd000b9ee82137ea2afc0984d2e76f010585.jpg)

estimation under this simple case of a single treatment $(W)$ and outcome $(Y)$, and subsequently generalize it to our zero-shot setting. Under a single intervention and outcome, we consider $n$ independent observations $P_{1},\ldots,P_{n}$ drawn from a distribution $\mathcal{P}$. For unit $i = 1,\dots,n$, $P_{i} = (W_{i},X_{i},Y_{i})\sim \mathcal{P}$ collects: a binary or continuous outcome of interest $Y_{i}\in \mathcal{Y}\subset \mathbb{R}$, instance features (i.e., pre-treatment covariates) $X_{i}\in \mathcal{X}\subset \mathbb{R}^{d}$, and a treatment-assignment indicator $W_{i}\in \{0,1\}$. We use the Neyman-Rubin potential outcomes framework [33], in which $Y_{i}(1),Y_{i}(0)$ reflect the outcome of interest either under treatment $(W_{i} = 1)$ or under control $(W_{i} = 0)$, respectively. In our running medical example, $Y_{i}(1)$ is the health status if exposed to the drug, and $Y_{i}(0)$ is the health status if not exposed to the drug. Notably, the fundamental problem of causal inference is that we only observe one of the two potential outcomes, as $Y_{i} = W_{i}\cdot Y_{i}(1) + (1 - W_{i})\cdot Y_{i}(0)$ (e.g., either health status with or without drug exposure can be observed for a specific individual, depending on whether they are prescribed the drug). However, it is possible to make personalized decisions by estimating treatment effects that are tailored to the attributes of individuals (based on features $X$). Thus, we focus on estimating $\tau(x)$, known as the conditional average treatment effect (CATE):

$$
\mathrm{CATE} = \tau(x) = \mathbb{E}_{\mathcal{P}}\left[ Y(1) - Y(0) \mid X = x \right] \tag{1}
$$

A variety of methods have been developed to estimate $\tau(x)$ from observational data [16].
These rely on the standard assumptions of unconfoundedness, consistency, and overlap [52]. Unconfoundedness: there are no unobserved confounders, i.e., $Y_{i}(0), Y_{i}(1) \perp W_{i} \mid X_{i}$. Consistency: $Y_{i} = Y_{i}(W_{i})$, i.e., treatment assignment determines whether $Y_{i}(1)$ or $Y_{i}(0)$ is observed. Overlap: treatment assignment is nondeterministic, such that for all $x$ in the support of $X$: $0 < P(W_{i} = 1 \mid X_{i} = x) < 1$.

# 4 Zero-shot causal learning

In many real-world settings (e.g., drugs, online A/B tests) novel interventions are frequently introduced, for which no natural experiment data are available. These settings require zero-shot CATE estimates. The zero-shot CATE estimation problem extends the prior section, except the intervention variable $W_{i}$ is no longer binary, but rather contains rich information about the intervention: $W_{i}\in \mathcal{W}\subset \mathbb{R}^{e}$ (e.g., a drug's chemistry), where $W_{i} = 0$ corresponds to a sample that did not receive any intervention. Thus, each intervention value $w$ has its own CATE function that we seek to estimate:

$$
\mathrm{CATE}_{w} = \tau_{w}(x) = \mathbb{E}_{\mathcal{P}}\left[ Y(w) - Y(0) \mid X = x \right], \tag{2}
$$

During training, we observe $n$ independent observations $P_{1},\ldots,P_{n}$ drawn from a distribution $\mathcal{P}$, with each $P_{i} = (W_{i},X_{i},Y_{i})\sim \mathcal{P}$. Let $\mathcal{W}_{seen}$ be the set of all interventions observed during training. The zero-shot CATE estimation task consists of estimating CATE for a novel intervention that was never observed during training:

Problem 1 (Zero-shot CATE estimation). Given $n$ training observations $(W_{1},X_{1},Y_{1}),\ldots,(W_{n},X_{n},Y_{n})$ drawn from $\mathcal{P}$ containing intervention information, individual features, and outcomes,
estimate $\tau_{w^{\prime}}(x)$ for a novel intervention $w^{\prime}\notin \mathcal{W}_{seen}$.

This problem formulation extends in a straightforward manner to combinations of interventions, by allowing a single intervention $W_{i}$ to consist of a set of intervention vectors. CaML supports combinations of interventions, as we elaborate on in Section 4.1.

![](images/67738677740886f6ce9fca14baa6753fb656dbd19eb4b6d56bb93d37273082f9.jpg)
Figure 2: Visual illustration of the CaML (causal meta-learning) framework. (1) We sample a task (i.e., an intervention) and a natural experiment from the training data consisting of individuals who either received the intervention or did not. Each individual has features $(X)$ and an outcome $(Y)$, and the intervention also has information $(W)$ (e.g., a drug's attributes). (2) For each individual we estimate the effect of the intervention on the outcome (pseudo-outcomes $\tilde{\tau}$). (3) We predict an individual's pseudo-outcomes $\tilde{\tau}$ using a model that fuses $X$ and $W$. CaML is trained by repeating this procedure across many tasks and corresponding natural experiments.

![](images/48a3702f1c06106319e3333dc092e7278e5cd0ed222a57ad4bf478b0ebc79316.jpg)

![](images/ea8f4910441d62ab96599d5637e493e068ac736ad5b2c76c6c8a8821b9288111.jpg)

CaML overview. We propose a novel framework for estimating CATE across multiple interventions, including ones that were never encountered during training. Our framework consists of three key components (Figure 2). First, we formulate CATE estimation as a meta-learning problem in which each task corresponds to CATE estimation for a unique intervention. A task dataset for a given intervention is constructed by sampling a natural experiment of all individuals who received the intervention, and a sample of individuals who did not. Tasks are augmented with intervention information $(W)$.
Synthesizing these natural experiments allows us to compute a noisy CATE label $\tilde{\tau}$ using any off-the-shelf estimator ($\tilde{\tau}$ is referred to as a pseudo-outcome in the causal inference literature [16]). Finally, we train a single meta-model to predict these labels using individual-level $(X)$ and intervention-level $(W)$ information, such that it is able to generalize to novel tasks, i.e., estimating CATE for novel interventions.

The CaML framework incorporates three important design considerations. (1) Single meta-model. In domains such as electronic health records and online marketing, we observe that large-scale datasets contain thousands of interventions with rich feature information $(W)$. Instead of training a separate model for each intervention, CaML trains a single meta-model that can estimate CATE across all interventions. This approach lets us leverage shared structure across tasks and generalize to novel interventions that were not present during training. (2) Pseudo-outcomes. Instead of directly modeling the response surfaces $\mathbb{E}[Y(w) \mid X = x]$ and $\mathbb{E}[Y(0) \mid X = x]$, we use pseudo-outcomes for each intervention to train our model. This approach is informed by recent studies indicating bias in estimating CATE from direct predictions of observed outcomes [10, 44]. CaML outperforms strong baselines that meta-learn $Y(w)$ and $Y(0)$ directly, as demonstrated in our experiments (see Tables 2 and 3, rows S-learner and T-learner with meta-learning). (3) Discrete tasks from continuous interventions. CaML takes advantage of the extensive literature on CATE estimation for single, binary interventions. By creating a natural experiment for each intervention, CaML taps into this literature and benefits from the high performance of recently developed nonparametric CATE estimators [16, 55, 44].
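The first two steps of this pipeline, sampling a per-intervention natural experiment and labeling it with plug-in pseudo-outcomes, can be sketched as follows. This is a simplified illustration: `build_task` and `pseudo_outcomes` are hypothetical helpers, and the 1-nearest-neighbour control regression stands in for the RA-learner the paper actually uses:

```python
import numpy as np

def build_task(records, w_j):
    """Synthesize the natural experiment for intervention w_j.

    records: list of (W, X, Y) triplets; W is an intervention id (0 = none).
    Returns treated and control groups, as in the meta-dataset construction.
    """
    treated = [(x, y) for w, x, y in records if w == w_j]
    control = [(x, y) for w, x, y in records if w == 0]
    return treated, control

def pseudo_outcomes(treated, control):
    """Plug-in pseudo-outcome for each treated unit: tau_i = Y_i - mu0(X_i),
    with mu0 estimated by a 1-nearest-neighbour control regression."""
    cx = np.array([x for x, _ in control])
    cy = np.array([y for _, y in control])
    taus = []
    for x, y in treated:
        mu0 = cy[np.argmin(np.abs(cx - x))]  # outcome of the nearest control unit
        taus.append(y - mu0)
    return taus

records = [(0, 0.1, 1.0), (0, 0.9, 2.0), (1, 0.12, 1.6), (1, 0.88, 2.5)]
treated, control = build_task(records, w_j=1)
taus = pseudo_outcomes(treated, control)  # ≈ [0.6, 0.5]
```

The meta-model is then trained to regress these per-task labels from $(w^{(j)}, x)$ pairs.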
CaML identifies CATE for novel interventions under the following assumptions: (1) for each observed intervention $w$, $\tau_w(x)$ is identifiable under the binary treatment assumptions (unconfoundedness, consistency, and overlap) in Section 3; this allows for valid training labels for each task. (2) $\tau_w(x) = \tau(w, x)$, i.e., a global function $\tau(w, x)$ unifies all intervention-specific CATE functions. (3) $\tau(w, x)$ is continuous in $w$; this allows the model to smoothly extrapolate the treatment effect to new interventions that are close to observed interventions in the intervention space. Lastly, (4) $W$ follows a continuous distribution.

# 4.1 Meta-dataset

We formulate CATE estimation as a meta-learning problem, in which each task refers to CATE estimation for a distinct intervention. Interventions as well as tasks in our meta-dataset are jointly indexed by $j \in \mathbb{N}$ with $1 \leq j \leq K$, such that we can refer to the $j$-th intervention information with $w^{(j)}$.

We then construct a meta-dataset $D$ in the following way:

$$
D = \left\{ \left( D_{\text{treated}}^{(j)} \cup D_{\text{control}}^{(j)},\; w^{(j)} \right) \right\}_{j=1}^{K}, \text{ with} \tag{3}
$$

$$
D_{\text{treated}}^{(j)} = \left\{ (X_{i}, Y_{i}) \mid W_{i} = w^{(j)} \right\} \quad \text{and} \quad D_{\text{control}}^{(j)} = \left\{ (X_{i}, Y_{i}) \mid W_{i} = 0 \right\}. \tag{4}
$$

$D^{(j)}$ denotes the natural experiment dataset for task $j$, composed of a treated group (instances which received the intervention, i.e., $W_{i} = w^{(j)}$) and a control group (instances which did not receive any intervention, i.e., $W_{i} = 0$). Each sample $i$ represents an individual, for which the quantities $(X_{i}, Y_{i})$ are collected as introduced in Section 3. In practice, we down-sample both groups (i.e.,
to 1 million samples for the treated and control groups) in our large-scale experiments.

We augment each task dataset $D^{(j)}$ with intervention information, $w^{(j)} \in \mathbb{R}^e$, for zero-shot generalization to new interventions [35, 18, 87, 39]. The form of $w^{(j)}$ varies with the problem domain: for text interventions, it could be a language model's text embedding [79, 84, 58], while biomedical treatments can be represented as nodes in a knowledge graph [8, 49]. Additionally, domain-specific features, such as treatment categories from an ontology, may be included in $w^{(j)}$. To handle combinations of interventions (e.g., pairs of drugs), we aggregate the $w$ for each intervention using an order-invariant pooling operation (we used the sum operator), and sample a separate natural experiment for individuals who received the full combination.

# 4.2 Estimating pseudo-outcomes

We next estimate the training targets for each task (i.e., intervention) in the meta-dataset. The training target $(\tilde{\tau}^{(j)})$ is an unbiased, but noisy, estimate of CATE. More formally, for each task $j$ (which points to the natural experiment dataset for intervention $w^{(j)}$), we estimate $\tilde{\tau}^{(j)}$, where $\mathbb{E}_{\mathcal{P}}[\tilde{\tau}^{(j)} \mid X = x] = \tau_{w^{(j)}}(x)$. Thus, $\tilde{\tau}_i^{(j)}$ denotes the target for the $i$-th sample in the $j$-th task (indexing is omitted when it is clear from context). We refer to these targets as pseudo-outcomes, following prior literature [16]. For prior work on pseudo-outcomes, refer to Appendix B. In Appendix E we demonstrate why these pseudo-outcomes provide an unbiased training objective. For a detailed explanation of why pseudo-outcomes are necessary instead of directly modeling $Y(w)$ and $Y(0)$, please see [44, 16, 10].

CaML is agnostic to the specific choice of pseudo-outcome estimator.
Thus, we assume a function $\eta(D^{(j)})$ which takes as input a task dataset $D^{(j)} \in D$ and returns a vector containing the pseudo-outcomes $\tilde{\tau}$ for each sample in the task. We extend each task dataset $D^{(j)}$ with the pseudo-outcomes, such that a sample holds the elements $(X_i, Y_i, \tilde{\tau}_i)$. Our key insight is that by collecting these pseudo-outcomes across multiple tasks, and predicting them using a combination of intervention and individual information $(W, X)$, we can develop a CATE estimator which generalizes to novel interventions. In practice, we use the RA-learner [17] and treat pseudo-outcome estimation as a data pre-processing step (Appendix C.6).
+
+# 4.3 Meta-model training
+
+Given $m$ target outcomes $Y_{1},\ldots ,Y_{m}$ (e.g., different drug side effects), our goal is then to learn a model $\Psi_{\theta}:\mathbb{R}^{e}\times \mathbb{R}^{d}\to \mathbb{R}^{m}$ that for parameters $\theta$ minimizes
+
+$$
+\theta^{*} = \underset{\theta}{\operatorname{argmin}}\ \mathbb{E}_{j \sim U(D)}\ \mathbb{E}_{W, X, \tilde{\tau} \sim D^{(j)}} \left[ L\left(\Psi_{\theta}\right) \right], \tag{5}
+$$
+
+where $U(D)$ denotes the discrete uniform distribution over the tasks of the meta-dataset $D$, and where $L(f)$ refers to a standard loss function between the pseudo-outcomes and the model output, i.e., $L(f) = (\tilde{\tau} - f(w, x))^2$. To assess whether the model generalizes to novel tasks, we partition our meta-dataset by task into non-overlapping subsets $D = D_{\mathrm{train}} \cup D_{\mathrm{val}} \cup D_{\mathrm{test}}$. During training, $\Psi_{\theta}$ is optimized on the training tasks $D_{\mathrm{train}}$. We validate and test this model on $D_{\mathrm{val}}$ and $D_{\mathrm{test}}$, which are thus unseen during training.
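To make the pseudo-outcome function $\eta$ concrete, here is a minimal regression-adjustment sketch in the spirit of the RA-learner (treated samples get $Y - \hat{\mu}_0(X)$, controls get $\hat{\mu}_1(X) - Y$). The linear outcome models and the toy data are illustrative stand-ins, not the paper's implementation:

```python
import numpy as np

def fit_linear(X, y):
    """Least-squares outcome regression mu(x); a simple stand-in for the
    flexible outcome learners used in practice."""
    A = np.hstack([X, np.ones((len(X), 1))])  # add intercept column
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return lambda Xq: np.hstack([Xq, np.ones((len(Xq), 1))]) @ coef

def estimate_pseudo_outcomes(X_t, Y_t, X_c, Y_c):
    """eta(D^(j)): unbiased but noisy per-sample CATE targets
    (treated: Y - mu0(X); control: mu1(X) - Y)."""
    mu0 = fit_linear(X_c, Y_c)  # control-outcome model
    mu1 = fit_linear(X_t, Y_t)  # treated-outcome model
    return Y_t - mu0(X_t), mu1(X_c) - Y_c

# toy natural experiment: outcome = x + 2 * treated + noise, so true CATE = 2
rng = np.random.default_rng(0)
X_c = rng.normal(size=(400, 1)); Y_c = X_c[:, 0] + rng.normal(0, 0.1, 400)
X_t = rng.normal(size=(400, 1)); Y_t = X_t[:, 0] + 2.0 + rng.normal(0, 0.1, 400)
tau_t, tau_c = estimate_pseudo_outcomes(X_t, Y_t, X_c, Y_c)
print(tau_t.mean(), tau_c.mean())  # both close to the true effect 2.0
```

The individual targets are noisy, but their conditional expectation matches the CATE, which is what Equation 5's squared loss needs.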
While the CaML framework is agnostic to a specific training strategy, we based our approach (Algorithm 1) on the Reptile meta-learning algorithm [53], which we find performs better than straightforward empirical risk minimization (cf. Section 6). For this, the objective is slightly modified to
+
+$$
+\theta^{*} = \underset{\theta}{\operatorname{argmin}}\ \mathbb{E}_{j \sim U(D)} \left[ L\left(A_{D^{(j)}}^{k}(\Psi_{\theta})\right) \right], \tag{6}
+$$
+
+**Algorithm 1** The CaML algorithm
+
+Require: meta-dataset $D$, meta-model $\Psi_{\theta}$ with initialized parameters $\theta$, hyperparameter $k$.
+for iteration $= 1, 2, \ldots$ do
+&nbsp;&nbsp;&nbsp;&nbsp;$j \leftarrow \text{SAMPLETASK}()$
+&nbsp;&nbsp;&nbsp;&nbsp;$D_{\text{treat}}^{(j)}, D_{\text{ctrl}}^{(j)}, w^{(j)} \leftarrow \text{QUERYTASKDATA}(j)$
+&nbsp;&nbsp;&nbsp;&nbsp;$\tilde{\tau}^{(j)} \leftarrow \text{ESTIMATEPSEUDOOUTCOMES}(D_{\text{treat}}^{(j)}, D_{\text{ctrl}}^{(j)})$
+&nbsp;&nbsp;&nbsp;&nbsp;$\theta' \leftarrow \text{ADAPT}((D_{\text{treat}}^{(j)}, D_{\text{ctrl}}^{(j)}), \tilde{\tau}^{(j)}, w^{(j)}, \Psi_{\theta}, k)$
+&nbsp;&nbsp;&nbsp;&nbsp;$g \leftarrow \theta - \theta'$ {Reptile gradient}
+&nbsp;&nbsp;&nbsp;&nbsp;$\theta \leftarrow \theta - \beta g$ {Gradient step for meta-model $\Psi_{\theta}$}
+end for
+return $\Psi_{\theta}$
+
+where $A_D^k \colon \mathcal{F} \to \mathcal{F}$ represents the operator that updates a model $f \in \mathcal{F}$ using data sampled from the dataset $D$ for $k$ gradient steps. This operator is defined in more detail as the ADAPT routine in Algorithm 1. Note that depending on the choice of CATE estimator, this routine iterates only over treated samples of a task dataset $D^{(j)}$ (as in our experiments), or over all samples, including untreated ones.
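As a minimal illustration of Algorithm 1's outer loop, the sketch below collapses the meta-model to a three-parameter linear model and ADAPT to $k$ full-batch gradient steps on the squared pseudo-outcome loss. The task distribution, step sizes, and the toy effect $\tau = w \cdot x$ are invented for illustration, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def features(w, x):
    # toy stand-in for Psi_theta: a linear model theta . [w, x, w*x]
    return np.stack([np.full_like(x, w), x, w * x], axis=1)

def adapt(theta, w, X, tau, k=10, lr=0.05):
    """ADAPT operator: k gradient steps on the squared pseudo-outcome loss."""
    for _ in range(k):
        F = features(w, X)
        grad = 2.0 * F.T @ (F @ theta - tau) / len(X)
        theta = theta - lr * grad
    return theta

theta, beta = np.zeros(3), 0.5
for _ in range(3000):                                # outer Reptile loop
    w = rng.uniform(0.5, 1.5)                        # SampleTask: intervention info
    X = rng.normal(size=64)                          # individuals' features
    tau = w * X + rng.normal(scale=0.01, size=64)    # noisy pseudo-outcomes
    theta_prime = adapt(theta, w, X, tau)
    theta = theta - beta * (theta - theta_prime)     # g = theta - theta'; theta <- theta - beta*g

# zero-shot CATE prediction for an unseen intervention w_new = 2.0
pred = float(features(2.0, np.array([1.5])) @ theta)
print(pred)  # close to the true effect w_new * x = 3.0
```

Within a single task the features $x$ and $w \cdot x$ are collinear, so no one task identifies the model; averaging Reptile updates across tasks with varying $w$ is what pins down the shared solution, mirroring why meta-learning across interventions helps.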
+
+# 4.4 CaML architecture
+
+To parameterize $\Psi_{\theta}$, we propose a simple but effective model architecture (see Section 6):
+
+$$
+\Psi_{\theta}(w, x) = \operatorname{MLP}_{1}([\tilde{w}; \tilde{x}]), \quad \text{with} \quad \tilde{x} = \operatorname{MLP}_{2}(x) \quad \text{and} \quad \tilde{w} = \operatorname{MLP}_{3}(w), \tag{7}
+$$
+
+where $[\cdot ;\cdot ]$ denotes concatenation. Equation 7 shows that the intervention information $w$ and individual features $x$ are encoded separately into dense vectors $\tilde{w}$ and $\tilde{x}$, respectively. Our MLPs consist of layers of the form $g(z) = z + \mathrm{ReLU}(\mathrm{Linear}(z))$.
+
+# 5 Theoretical analysis
+
+We now consider zero-shot causal learning from a theoretical perspective. Under simplified assumptions, we bound the prediction error in the zero-shot setting.
+
+We formulate the setting as a supervised learning problem with noisy labels (pseudo-outcomes), where we learn a smooth function $f = \Psi \colon (w, x) \mapsto \tau$ from a family $\mathcal{F}$. We focus on $\tau \in \mathbb{R}$, and assume $\tau \in [0,1]$ without loss of generality, since we can normalize $\tau$ to this range. The training dataset has $n$ interventions with $m$ samples each, i.e., first $n$ i.i.d. draws from $P_W$: $w^{(1)}, \ldots, w^{(n)}$, and then for each $w^{(j)}$, $m$ i.i.d. draws from $P_X$: $x_1^{(j)}, \ldots, x_m^{(j)}$.
+
+The main theorem quantifies the rate at which combining information across different interventions helps zero-shot performance. We prove a finite-sample generalization bound for the ERM variant of CaML. ERM is a special case of ADAPT with $k = 1$ that is more conducive to rigorous analysis. The advantage of Reptile over ERM is orthogonal to this analysis, and we refer readers to the original discussion [54].
We assume the estimated pseudo-outcomes $\tilde{\tau}$ during training satisfy $\tilde{\tau} = \tau + \xi$, where $\xi$ is independent zero-mean noise with $|\xi| \leq \epsilon$ almost surely for some $\epsilon \geq 0$. The empirical risk minimizer is
+
+$$
+\hat{f} = \underset{f \in \mathcal{F}}{\operatorname{argmin}}\ \hat{L}(f) = \underset{f \in \mathcal{F}}{\operatorname{argmin}}\ \frac{1}{nm} \sum_{j=1}^{n} \sum_{i=1}^{m} \left(f\left(w^{(j)}, x_{i}^{(j)}\right) - \tilde{\tau}_{i}^{(j)}\right)^{2}.
+$$
+
+The test error is $L(f) = \mathbb{E}_{W,X,\tau}[(f(w,x) - \tau)^2]$. Let $f^{*} = \underset{f \in \mathcal{F}}{\operatorname{argmin}}\ L(f)$. We bound the excess loss $L(\hat{f}) - L(f^{*})$. Our key assumption is that interventions with similar features $W$ have similar effects in expectation. Concretely, we assume that all functions in our family are smooth with respect to $W$, i.e., $\forall f\in \mathcal{F},\ \mathbb{E}_{W,X}\left[\| \partial f / \partial W\|_2^2\right]\leq \beta^2$.
+
+Theorem 1. Under our assumptions, with probability $1 - \delta$,
+
+$$
+\begin{array}{l} L(\hat{f}) \leq L(f^{*}) + 8(1 + \epsilon) R_{nm}(\mathcal{F}) + 8 \sqrt{\frac{(1 + \epsilon) R_{nm}(\mathcal{F}) \log (1 / \delta)}{n}} + \frac{2 \log (1 / \delta)}{3 n}\ + \\ (1 + \epsilon) \sqrt{\frac{(32 C \beta^{2} + 2(1 + \epsilon)^{2} / m) \log (1 / \delta)}{n}}, \\ \end{array}
+$$
+
+where $R_{nm}$ is a novel notion of zero-shot Rademacher complexity defined in equation (9), and $C$ is a Poincaré constant that only depends on the distribution of $W$.
+
+For large $n, m$, the leading terms are the function complexity $R_{nm}(\mathcal{F})$ and an $O(\sqrt{1/n})$ term whose numerator scales with $\beta$ and $(1 + \epsilon)^2/m$. This validates our intuition that when the intervention information $W$ is more informative of the true treatment effects (smaller $\beta$), and when the estimation of $\tau$ in the training dataset is more accurate, performance is better on novel interventions. Please refer to Section A for the full proof.
Compared to standard generalization bounds, which usually have a $\sqrt{1/n}$ term, our main technical innovation involves bounding the variance via the smoothness of the function class combined with Poincaré-type inequalities. When $\beta$ is much smaller than 1, we achieve a tighter bound.
+
+| Dataset | Samples | Features (X) | Outcome (Y) | Intervention type | Intervention information (W) |
+| --- | --- | --- | --- | --- | --- |
+| Claims | Patients | Patient history (binned counts of medical codes) | Pancytopenia onset | Drug intake (prescription) | Drug embedding (knowledge graph) |
+| LINCS | Cell lines | Cancer cell encyclopedia | Expression of landmark genes (DEG) | Perturbagen (small molecule) | Molecular embeddings (RDKit) |
+ +Table 1: High-level overview of our two experimental settings. Details in Appendix C.1. + +# 6 Experiments + +We explore to what extent zero-shot generalization is practical when predicting the effects of interventions. We thus design two novel evaluation settings using real-world data in domains where zero-shot CATE estimation will be highly impactful: (1) Health Insurance Claims: predicting the effect of a drug on a patient, and (2) LINCS: predicting the effect of a perturbation on a cell. We use new datasets because existing causal inference benchmarks [31, 73] focus on a single intervention. By contrast, zero-shot causal learning must be conceptualized in a multi-intervention setting. + +Zero-shot Evaluation. Each task corresponds to estimating CATE for a single intervention, across many individual samples (e.g. patients). We split all tasks into meta-training/meta-validation, and a hold-out meta-testing set for evaluating zero-shot predictions (Table 2, unseen drugs for Claims and Table 3, unseen molecular perturbations in LINCS). For the Claims dataset, we also consider the challenging setting of combinations of unseen drugs (Table 5). + +Each meta-validation and meta-testing task contains a natural experiment of many samples (e.g., patients) who received the unseen intervention, and many control samples who did not receive the intervention. The same patient (Claims) or cell-line (LINCS) can appear in multiple tasks (if they received different interventions at different times). Thus, to ensure a fair zero-shot evaluation, we exclude all samples who have ever received a meta-testing intervention from meta-val/meta-train. Similarly, we exclude all meta-validation patients from meta-train. Details on holdout selection are provided in Appendix C.2. + +Table 1 gives an overview of both benchmarks. In the Claims dataset, we compare zero-shot predictions with strong single-intervention baselines which cannot generalize to unseen interventions. 
To do so, we further split each task in meta-validation and meta-testing into a train/test (50/50) split of samples. These baselines are trained on a task's train split, and all methods are evaluated on the test split of the meta-testing tasks. On the LINCS dataset, as each task consists of $< 100$ cells, single-intervention baselines performed weakly and are excluded from analysis.
+
+Baselines. We compare the zero-shot performance of CaML to two distinct categories of baselines. (1) Trained directly on test interventions. These are strong CATE estimators from prior work and can only be trained on a single intervention. Thus, we train a single model on each meta-testing task's train split, and evaluate performance on its test split. This category includes T-learner [44], X-learner [44], RA-learner [16], R-learner [55], DragonNet [72], TARNet [70], and FlexTENet [17].
+
+(2) Zero-shot baselines are trained across all meta-training tasks and are able to incorporate intervention information $(W)$. These methods are thus, in principle, capable of generalizing to unseen interventions. We use GraphITE [25] and Structured Intervention Networks (SIN) [35]. We also introduce two strong baselines which learn to directly estimate $Y(w)$ and $Y(0)$ by meta-learning across all training interventions, without using pseudo-outcomes: S-learner and T-learner with meta-learning. These extend the S-learner and T-learner from prior work [44] to incorporate intervention information $(W)$ in their predictions. We elaborate on implementation details of baselines in Appendix C.7. For details on hyperparameter search and fair comparison, see Appendix C.1.
+
+Ablations. In our first ablation experiment (w/o meta-learning), we trained the CaML model without meta-learning, instead using the standard empirical risk minimization (ERM) technique [78]. Our second ablation (w/o RA-learner) assesses the sensitivity of CaML's performance to different pseudo-outcome estimation strategies.
For further details on how these ablation studies were implemented, see Appendix C.3. We discuss the key findings from these ablations in Section 6.3. + +# 6.1 Setting 1: Personalized drug side effect prediction from large-scale medical claims + +Our first setting (Claims) is to predict the increased likelihood of a life-threatening side effect caused by a drug prescription. We leverage a large-scale insurance claims dataset of over 3.5 billion claims across 30.6 million patients in the United States2. Each date-stamped insurance claim contains a set of diagnoses (ICD-10 codes), drug prescriptions (DrugBank ID), procedures (ICD-10 codes), and laboratory results (LOINC codes). Laboratory results were categorized by whether the result was high, low, normal, abnormal (for non-continuous labs), or unknown. + +Interventions are administration of one drug $(n = 745)$ , or two drugs $(n = 22,883)$ prescribed in combination. Time of intervention corresponds to the first day of exposure. Intervention information $(W)$ was generated from pre-trained drug embeddings from a large-scale biomedical knowledge graph [8] (Appendix C). We compute drug combination embeddings as the sum of the embeddings of the constituent drugs. We focus on the binary outcome $(Y)$ of the occurrence of the side effect pancytopenia within 90 days of intervention exposure. Pancytopenia is a deficiency across all three blood cell lines (red blood cells, white blood cells, and platelets). Pancytopenia is life-threatening, with a $10 - 20\%$ mortality rate [38, 43], and is a rare side effect of many common medications [42] (e.g. arthritis and cancer drugs), which in turn require intensive monitoring of the blood work. Following prior work [24], patient medical history features $(X)$ were constructed by time-binned counts of each unique medical code (diagnosis, procedure, lab result, drug prescription) at seven different time scales before the drug was prescribed, resulting in a total of 443,940 features. 
For more details, refer to Appendix C.1.
+
+Metrics. We rely on best practices for evaluating CATE estimators in observational data, as established by recent work [86, 11], which recommend assessing treatment rules by comparing subgroups across different quantiles of estimated CATE. We follow the high vs. others RATE (rank-weighted average treatment effect) approach from Yadlowsky et al. [86], which computes the difference between the average treatment effect (ATE) of the individuals ranked above the $u$ quantile by predicted CATE and the ATE across all individuals (for more details, see Appendix C.1). For instance, RATE @ 0.99 is the difference between the ATE of the top $1\%$ of samples (by estimated CATE) and the ATE across all samples, which we would expect to be high if the CATE estimator is accurate. Note that estimates of RATE can be negative if model predictions are inversely associated with CATE. We elaborate on the RATE computation in Appendix C.1.
+
+The real-world use case of our model is preventing drug prescription for a small subset of high-risk individuals. Thus, more specifically, for each task $j$, intervention $w^{(j)}$ in the meta-dataset, and meta-model $\Psi_{\theta}$, we compute $RATE@u$ for each $u$ in $\{0.999, 0.998, 0.995, 0.99\}$ across individuals who received the intervention. We use a narrow range for $u$ because pancytopenia is a very rare event, occurring in less than $0.3\%$ of the patients in our dataset. Hence, in a real-world deployment scenario, it is necessary to isolate the small subset of high-risk patients from the vast majority of patients for whom there is no risk of pancytopenia onset.
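The high vs. others computation can be sketched in a few lines. Here `effect` stands in for per-individual unbiased effect estimates (which in observational data must themselves be estimated), and the toy data are invented for illustration:

```python
import numpy as np

def rate_at_u(cate_pred, effect, u):
    """High vs. others RATE@u: average effect among individuals ranked in the
    top (1 - u) fraction by predicted CATE, minus the average effect overall."""
    cutoff = np.quantile(cate_pred, u)
    top = cate_pred >= cutoff
    return effect[top].mean() - effect.mean()

rng = np.random.default_rng(0)
true_effect = rng.exponential(scale=0.01, size=100_000)            # rare, skewed effects
good_model = true_effect + rng.normal(0.0, 0.001, size=100_000)    # accurate ranking
random_model = rng.normal(size=100_000)                            # uninformative ranking

print(rate_at_u(good_model, true_effect, 0.99))    # clearly positive
print(rate_at_u(random_model, true_effect, 0.99))  # near zero
```

An accurate ranker concentrates high-effect individuals in the top 1%, so its RATE is large; an uninformative ranker selects a random subset with roughly the population ATE, so its RATE hovers around zero, matching the Random row of Table 2.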

| Method | RATE @0.999 (↑) | RATE @0.998 (↑) | RATE @0.995 (↑) | RATE @0.99 (↑) | Recall @0.999 (↑) | Recall @0.998 (↑) | Recall @0.995 (↑) | Recall @0.99 (↑) | Precision @0.999 (↑) | Precision @0.998 (↑) | Precision @0.995 (↑) | Precision @0.99 (↑) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Random | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.01 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
| T-learner | 0.32 | 0.26 | 0.16 | 0.10 | 0.12 | 0.18 | 0.26 | 0.31 | 0.36 | 0.29 | 0.18 | 0.11 |
| X-learner | 0.06 | 0.05 | 0.04 | 0.03 | 0.02 | 0.04 | 0.08 | 0.12 | 0.09 | 0.07 | 0.06 | 0.05 |
| R-learner | 0.19 | 0.17 | 0.12 | 0.08 | 0.06 | 0.10 | 0.19 | 0.26 | 0.24 | 0.21 | 0.15 | 0.11 |
| RA-learner | 0.47 | 0.37 | 0.23 | 0.14 | 0.17 | 0.26 | 0.38 | 0.45 | 0.54 | 0.42 | 0.26 | 0.16 |
| DragonNet | 0.09 | 0.07 | 0.05 | 0.04 | 0.03 | 0.05 | 0.08 | 0.11 | 0.15 | 0.12 | 0.08 | 0.06 |
| TARNet | 0.15 | 0.12 | 0.07 | 0.05 | 0.05 | 0.08 | 0.12 | 0.14 | 0.18 | 0.15 | 0.09 | 0.06 |
| FlexTENet | 0.10 | 0.09 | 0.06 | 0.04 | 0.04 | 0.06 | 0.11 | 0.16 | 0.15 | 0.13 | 0.09 | 0.06 |
| GraphITE | 0.19 | 0.12 | 0.05 | 0.03 | 0.07 | 0.08 | 0.09 | 0.10 | 0.23 | 0.14 | 0.07 | 0.04 |
| SIN | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.01 | 0.02 | 0.01 | 0.01 | 0.01 | 0.01 |
| S-learner w/ meta-learning | 0.21 | 0.16 | 0.09 | 0.05 | 0.08 | 0.11 | 0.15 | 0.16 | 0.25 | 0.18 | 0.10 | 0.06 |
| T-learner w/ meta-learning | 0.40 | 0.31 | 0.18 | 0.11 | 0.15 | 0.22 | 0.32 | 0.38 | 0.45 | 0.35 | 0.21 | 0.13 |
| CaML - w/o meta-learning | 0.39 | 0.31 | 0.18 | 0.11 | 0.15 | 0.22 | 0.32 | 0.39 | 0.45 | 0.35 | 0.22 | 0.14 |
| CaML - w/o RA-learner | 0.45 | 0.36 | 0.22 | 0.14 | 0.16 | 0.24 | 0.34 | 0.41 | 0.48 | 0.38 | 0.26 | 0.16 |
| CaML (ours) | 0.48 | 0.38 | 0.23 | 0.13 | 0.18 | 0.27 | 0.38 | 0.45 | 0.54 | 0.43 | 0.26 | 0.16 |
+
+Table 2: Performance results for the Claims dataset (predicting the effect of drug exposure on pancytopenia onset from patient medical history). Key findings are: (1) CaML outperforms all zero-shot baselines (RATE is $18-27\%$ higher than T-learner w/ meta-learning, the strongest zero-shot baseline); (2) CaML performs better (up to $8\times$ higher RATE values) than 6 of the 7 baselines that are trained directly on the test interventions, and performs comparably to the strongest baseline trained directly on the test interventions (the RA-learner). The mean is reported across all runs; standard deviations are included in Appendix Table 4. Analogous trends hold for generalization to pairs of unseen drugs (Appendix Table 5).
+
+Additionally, because our meta-testing dataset consists of individuals treated with drugs known to cause pancytopenia, the observational metrics of recall and precision are also a rough proxy for successful CATE estimation (and highly correlated with RATE, Table 2). Thus, as secondary metrics, we also compute $Recall@u$ and $Precision@u$ for the same set of thresholds as RATE, where a positive label is defined as the occurrence of pancytopenia after intervention.
+
+# 6.2 Setting 2: Cellular gene expression response due to perturbation
+
+Our second setting (LINCS) is to predict how a cell's gene expression $(Y)$ will respond to intervention by a perturbagen (a small-molecule compound such as a drug). This is a critical problem, as accurately predicting intervention response will accelerate drug discovery. We use data for 10,325 different perturbagens from the LINCS Program [74]. Each perturbagen corresponds to a different small molecule. Molecular embeddings were generated using the RDKit featurizer [46] and used as intervention information $(W)$. Outcomes $(Y)$ of interest are post-intervention gene expression across the top-50 and top-20 differentially expressed landmark genes (DEGs) in the LINCS dataset.
We did not look at all 978 genes since most do not show significant variation upon perturbation. We use 19,221 features $(X)$ from the Cancer Cell Line Encyclopedia (CCLE) [22] to characterize each cell-line $(n = 99)$, each of which corresponds to unperturbed gene expression measured in a different lab environment using a different experimental assay. For more details, see Appendix C.1.
+
+Metrics. A key advantage of experiments on cells is that at evaluation time we can observe both $Y(0)$ and $Y(1)$ for the same cell line $X$, through multiple experiments on clones of the same cell line in controlled lab conditions. In the LINCS dataset, $Y(0)$ is also measured for all cells which received an intervention. Thus, we can directly compute the precision in estimating heterogeneous effects (PEHE) on all treated cells in our meta-testing dataset, an established measure for CATE estimation performance analogous to mean-squared error [30] (see Appendix C.1).
+
+# 6.3 Key findings
+
+CaML's zero-shot predictions outperform baselines with direct access to the target intervention. In the medical claims setting, single intervention baselines (Tables 2, dark grey rows) are the highest performing baselines as we train them directly on the meta-test intervention. Still, CaML outperforms 6 out of 7 of these baselines (up to $8 \times$ higher RATE) and achieves comparable performance to the strongest of these baselines, the RA-learner. Furthermore, CaML strongly outperforms alternative zero-shot CATE estimators (RATE is $18-27\%$ higher than T-learner w/ meta-learning, the strongest zero-shot baseline). In the LINCS data, multi-intervention learners are strongest as
+
+| | PEHE 50 DEGs (↓) | PEHE 20 DEGs (↓) |
+| --- | --- | --- |
+| Mean | 3.78 | 4.11 |
+| GraphITE | 3.58 ± 0.023 | 3.82 ± 0.011 |
+| SIN | 3.78 ± 0.001 | 4.06 ± 0.001 |
+| S-learner w/ meta-learning | 3.63 ± 0.004 | 3.90 ± 0.004 |
+| T-learner w/ meta-learning | 3.61 ± 0.007 | 3.85 ± 0.006 |
+| CaML - w/o meta-learning | 3.57 ± 0.006 | 3.79 ± 0.004 |
+| CaML - w/o RA-learner | 4.28 ± 0.517 | 4.60 ± 0.413 |
+| CaML (ours) | 3.56 ± 0.001 | 3.78 ± 0.005 |
+ +Table 3: Performance results for the LINCS dataset (predicting the effect of an unseen perturbation on the gene expression of an unseen cell-line). CaML outperforms all baselines. Improvement is largest for the 20 most differentially expressed genes, where most signal is expected. + +there are only a small number of instances (cell lines) per intervention3. CaML outperforms both single-intervention and multi-intervention learners by drawing from both of their strengths—it allows us to use strong CATE estimation methods (i.e. the RA-learner) which previously were restricted to single interventions, while sharing information across multiple interventions. + +CaML learns to generalize from single interventions to combinations of unseen interventions (drug pairs). We evaluate CaML's performance in the challenging setting of predicting the personalized effects of combinations of two drugs which are both unseen during training, while only training on interventions consisting of single drugs. CaML achieves strong performance results (see Appendix Table 5), surpassing the best baseline trained on the test tasks, and outperforms all zero-shot baselines, across all 12 metrics. + +Understanding CaML's performance results. Our ablation studies explain that CaML's performance gains are due to (1) our meta-learning formulation and algorithm (in contrast to the w/o meta-learning row, in which ERM is used to train the model), and (2) the flexible CATE estimation strategy, allowing to take advantage of recently developed CATE estimators previously restricted to single interventions (in contrast to the w/o RA-learner row, in which an alternative pseudo-outcome estimator is used). Lastly, (3) comparison to existing binary intervention CATE estimators trained separately on each meta-testing intervention (Table 2, grey rows) shows that we gain from learning from thousands of interventions. See Appendix C.3 for details on ablations. 
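For reference, the PEHE metric used for the LINCS evaluation can be sketched as follows; taking the root follows common usage, and the toy numbers are illustrative:

```python
import numpy as np

def pehe(tau_hat, tau_true):
    """(Root) precision in estimating heterogeneous effects: RMSE between
    predicted and true per-sample treatment effects, computable on LINCS
    because Y(0) and Y(w) are both measured for clones of each cell line."""
    return np.sqrt(np.mean((tau_hat - tau_true) ** 2))

# toy check: a perfect predictor scores 0; a predictor that outputs only the
# average effect scores the standard deviation of the true effects
tau = np.array([1.0, 2.0, 3.0, 4.0])
print(pehe(tau, tau))                      # 0.0
print(pehe(np.full(4, tau.mean()), tau))   # sqrt(1.25) ≈ 1.118
```

Unlike RATE, PEHE penalizes per-sample errors directly, which is only possible here because both potential outcomes are observed for each treated cell.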
+
+# 7 Conclusion
+
+We introduce a novel approach to predict the effects of novel interventions. CaML consistently outperforms state-of-the-art baselines by unlocking zero-shot capability for many recently developed CATE estimation methods that were previously restricted to studying single interventions in isolation. While our study is limited to retrospective data, we plan to prospectively validate our findings. Future work includes designing new model architectures and CATE estimators that learn well under the CaML framework, developing new metrics to evaluate zero-shot CATE estimators, and more generally exploring novel learning strategies that enable zero-shot causal learning.
+
+Societal impacts. In high-stakes decision-making, inaccurate predictions can lead to severe consequences. It is important not to rely too heavily on model predictions and to proactively involve domain experts, such as doctors, in the decision-making process. Additionally, it is crucial to ensure that underserved communities are not disadvantaged by errors in treatment effect estimates due to underrepresentation in the training data. Important avenues for achieving equitable CATE estimation in future work include process-oriented approaches (i.e., evaluating model errors for underserved demographics) and outcome-oriented methods (i.e., gauging model impacts on demographic utility) [12, 57, 69, 2]. Furthermore, the deployment of CATE models could raise privacy concerns. These models typically require access to individual patient data to estimate personalized treatment effects accurately. Ensuring the privacy and security of this sensitive information is crucial to avoid potential data breaches or unauthorized access, which could harm patients and erode public trust in healthcare systems.
+
+# Acknowledgements
+
+We are deeply grateful to Stefan Wager for his invaluable insights and extensive contributions to our discussions.
We thank Emma Pierson, Kexin Huang, Kaidi Cao, Yanay Rosen, Johann Gaebler, Maria Brbic, Kefang Dong, June Vuong for helpful conversations. H.N was supported by a Stanford Knight-Hennessy Scholarship and the National Science Foundation Graduate Research Fellowship under Grant No. DGE-1656518. M.M. was supported by DARPA N660011924033 (MCS), NIH NINDS R61 NS11865, GSK, Wu Tsai Neurosciences Institute. A.S and S.O were supported by the American Slovenian Education Foundation (ASEF) fellowship. M.Y was supported by the Microsoft Research PhD fellowship. Y.R was supported by funding from GlaxoSmithKline LLC. Y.C. was supported by Stanford Graduate Fellowship and NSF IIS 2045685. We also gratefully acknowledge the support of Stanford HAI for Google Cloud Credits, DARPA under Nos. HR00112190039 (TAMI), N660011924033 (MCS); ARO under Nos. W911NF-16-1-0342 (MURI), W911NF-16-1-0171 (DURIP); NSF under Nos. OAC-1835598 (CINES), OAC-1934578 (HDR), CCF-1918940 (Expeditions), NIH under No. 3U54HG010426-04S1 (HuBMAP), Stanford Data Science Initiative, Wu Tsai Neurosciences Institute, Amazon, Docomo, GSK, Hitachi, Intel, JPMorgan Chase, Juniper Networks, KDDI, NEC, and Toshiba. + +# References + +[1] Ahmed M Alaa and Mihaela Van Der Schaar. Bayesian inference of individualized treatment effects using multi-task gaussian processes. Advances in neural information processing systems, 30, 2017. +[2] Tim Althoff, Hamed Nilforoshan, Jenna Hua, and Jure Leskovec. Large-scale diet tracking data reveal disparate associations between food environment and diet. Nature communications, 13(1):267, 2022. +[3] Susan Athey and Guido Imbens. Recursive partitioning for heterogeneous causal effects. Proceedings of the National Academy of Sciences, 113(27):7353-7360, 2016. +[4] Yoshua Bengio, Samy Bengio, and Jocelyn Cloutier. Learning a synaptic learning rule. CiteSeer, 1990. +[5] Ioana Bica, Ahmed M Alaa, Craig Lambert, and Mihaela Van Der Schaar. 
From real-world patient data to individualized treatment effects using machine learning: current and future methods to address underlying challenges. Clinical Pharmacology & Therapeutics, 109(1):87-100, 2021.
+[6] Stéphane Boucheron, Gábor Lugosi, and Pascal Massart. Concentration inequalities: A nonasymptotic theory of independence. Oxford University Press, 2013.
+[7] Olivier Bousquet. A Bennett concentration inequality and its application to suprema of empirical processes. Comptes Rendus Mathematique, 334(6):495-500, 2002.
+[8] Payal Chandak, Kexin Huang, and Marinka Zitnik. Building a knowledge graph to enable precision medicine. bioRxiv, 2022.
+[9] Hong-Bin Chen, Sinho Chewi, and Jonathan Niles-Weed. Dimension-free log-Sobolev inequalities for mixture distributions. Journal of Functional Analysis, 281(11):109236, 2021.
+[10] Victor Chernozhukov, Denis Chetverikov, Mert Demirer, Esther Duflo, Christian Hansen, Whitney Newey, and James Robins. Double/debiased machine learning for treatment and structural parameters, 2018.
+[11] Victor Chernozhukov, Mert Demirer, Esther Duflo, and Ivan Fernandez-Val. Generic machine learning inference on heterogeneous treatment effects in randomized experiments, with an application to immunization in India. Technical report, National Bureau of Economic Research, 2018.
+[12] Sam Corbett-Davies, J Gaebler, Hamed Nilforoshan, Ravi Shroff, and Sharad Goel. The measure and mismeasure of fairness. Journal of Machine Learning Research, 2023.
+[13] Richard K Crump, V Joseph Hotz, Guido W Imbens, and Oscar A Mitnik. Nonparametric tests for treatment effect heterogeneity. The Review of Economics and Statistics, 90(3):389-405, 2008.
+[14] Alicia Curth, David Svensson, Jim Weatherall, and Mihaela van der Schaar. Really doing great at estimating CATE? A critical look at ML benchmarking practices in treatment effect estimation. In Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2), 2021.
+[15] Alicia Curth and Mihaela van der Schaar. Doing great at estimating CATE? On the neglected assumptions in benchmark comparisons of treatment effect estimators. arXiv preprint arXiv:2107.13346, 2021.
+[16] Alicia Curth and Mihaela van der Schaar. Nonparametric estimation of heterogeneous treatment effects: From theory to learning algorithms. In International Conference on Artificial Intelligence and Statistics, pages 1810-1818. PMLR, 2021.
+[17] Alicia Curth and Mihaela van der Schaar. On inductive biases for heterogeneous treatment effect estimation. Advances in Neural Information Processing Systems, 34:15883-15894, 2021.
+[18] Gerald DeJong and Raymond Mooney. Explanation-based learning: An alternative view. Machine Learning, 1986.
+[19] Qiaonan Duan, Corey Flynn, Mario Niepel, Marc Hafner, Jeremy L Muhlich, Nicolas F Fernandez, Andrew D Rouillard, Christopher M Tan, Edward Y Chen, Todd R Golub, et al. LINCS Canvas Browser: interactive web app to query, browse and interrogate LINCS L1000 gene expression signatures. Nucleic Acids Research, 42(W1):W449-W460, 2014.
+
+[20] Chelsea Finn, Pieter Abbeel, and Sergey Levine. Model-agnostic meta-learning for fast adaptation of deep networks. In International Conference on Machine Learning, pages 1126-1135. PMLR, 2017.
+[21] Dennis Frauen and Stefan Feuerriegel. Estimating individual treatment effects under unobserved confounding using binary instruments. arXiv preprint arXiv:2208.08544, 2022.
+[22] Mahmoud Ghandi, Franklin W Huang, Judit Jané-Valbuena, Gregory V Kryukov, Christopher C Lo, E Robert McDonald, 3rd, Jordi Barretina, Ellen T Gelfand, Craig M Bielski, Haoxin Li, Kevin Hu, Alexander Y Andreev-Drakhlin, Jaegil Kim, Julian M Hess, Brian J Haas, François Aguet, Barbara A Weir, Michael V Rothberg, Brenton R Paolella, Michael S Lawrence, Rehan Akbani, Yiling Lu, Hong L Tiv, Prafulla C Gokhale, Antoine de Weck, Ali Amin Mansour, Coyin Oh, Juliann Shih, Kevin Hadi, Yanay Rosen, Jonathan Bistline, Kavitha Venkatesan, Anupama Reddy, Dmitriy Sonkin, Manway Liu, Joseph Lehar, Joshua M Korn, Dale A Porter, Michael D Jones, Javad Golji, Giordano Caponigro, Jordan E Taylor, Caitlin M Dunning, Amanda L Creech, Allison C Warren, James M McFarland, Mahdi Zamanighomi, Audrey Kauffmann, Nicolas Stransky, Marcin Imielinski, Yosef E Maruvka, Andrew D Cherniack, Aviad Tsherniak, Francisca Vazquez, Jacob D Jaffe, Andrew A Lane, David M Weinstock, Cory M Johannessen, Michael P Morrissey, Frank Stegmeier, Robert Schlegel, William C Hahn, Gad Getz, Gordon B Mills, Jesse S Boehm, Todd R Golub, Levi A Garraway, and William R Sellers. Next-generation characterization of the cancer cell line encyclopedia. Nature, 569(7757):503-508, May 2019. +[23] Donald P Green and Holger L Kern. Modeling heterogeneous treatment effects in survey experiments with bayesian additive regression trees. *Public opinion quarterly*, 76(3):491-511, 2012. +[24] Lin Lawrence Guo, Ethan Steinberg, Scott Lanyon Fleming, Jose Posada, Joshua Lemmon, Stephen R Pfohl, Nigam Shah, Jason Fries, and Lillian Sung. Ehr foundation models improve robustness in the presence of temporal distribution shift. medRxiv, 2022. +[25] Shonosuke Harada and Hisashi Kashima. Graphite: Estimating individual effects of graph-structured treatments. In Proceedings of the 30th ACM International Conference on Information & Knowledge Management, pages 659–668, 2021. +[26] Negar Hassanpour and Russell Greiner. 
Counterfactual regression with importance sampling weights. In IJCAI, pages 5880-5887, 2019.
+[27] Negar Hassanpour and Russell Greiner. Learning disentangled representations for counterfactual regression. In International Conference on Learning Representations, 2019.
+[28] Trevor Hastie, Robert Tibshirani, Jerome H Friedman, and Jerome H Friedman. The elements of statistical learning: data mining, inference, and prediction, volume 2. Springer, 2009.
+[29] Leon Hetzel, Simon Böhm, Niki Kilbertus, Stephan Günnemann, Mohammad Lotfollahi, and Fabian Theis. Predicting single-cell perturbation responses for unseen drugs. arXiv preprint arXiv:2204.13545, 2022.
+[30] Jennifer L Hill. Bayesian nonparametric modeling for causal inference. Journal of Computational and Graphical Statistics, 20(1):217-240, 2011.
+[31] Jennifer L Hill, Jeanne Brooks-Gunn, and Jane Waldfogel. Sustained effects of high participation in an early intervention for low-birth-weight premature infants. Developmental Psychology, 39(4):730, 2003.
+[32] Timothy Hospedales, Antreas Antoniou, Paul Micaelli, and Amos Storkey. Meta-learning in neural networks: A survey. IEEE Transactions on Pattern Analysis and Machine Intelligence, 44(9):5149-5169, 2021.
+[33] Guido W Imbens and Donald B Rubin. Causal inference in statistics, social, and biomedical sciences. Cambridge University Press, 2015.
+[34] Fredrik Johansson, Uri Shalit, and David Sontag. Learning representations for counterfactual inference. In International Conference on Machine Learning, pages 3020-3029. PMLR, 2016.
+[35] Jean Kaddour, Yuchen Zhu, Qi Liu, Matt J Kusner, and Ricardo Silva. Causal effect inference for structured treatments. Advances in Neural Information Processing Systems, 34:24841-24854, 2021.
+
+[36] Edward H Kennedy. Optimal doubly robust estimation of heterogeneous causal effects. arXiv preprint arXiv:2004.14497, 2020.
+[37] Edward H Kennedy. Towards optimal doubly robust estimation of heterogeneous causal effects (2020).
URL https://arxiv.org/abs, 2020. +[38] Jitender Mohan Khunger, S Arulselvi, Uma Sharma, Sunil Ranga, and VH Talib. Pancytopenia—a clinico haematological study of 200 cases. Indian journal of pathology & microbiology, 45(3):375-379, 2002. +[39] Pang Wei Koh, Shiori Sagawa, Henrik Marklund, Sang Michael Xie, Marvin Zhang, Akshay Balsubramani, Weihua Hu, Michihiro Yasunaga, Richard Lanas Phillips, Irena Gao, et al. Wilds: A benchmark of in-the-wild distribution shifts. In International Conference on Machine Learning (ICML), 2021. +[40] Andrei V Konstantinov, Stanislav R Kirpichenko, and Lev V Utkin. Heterogeneous treatment effect with trained kernels of the nadaraya-watson regression. arXiv preprint arXiv:2207.09139, 2022. +[41] N Kostantinos. Gaussian mixtures and their applications to signal processing. Advanced signal processing handbook: theory and implementation for radar, sonar, and medical imaging real time systems, pages 3-1, 2000. +[42] Michael Kuhn, Ivica Letunic, Lars Juhl Jensen, and Peer Bork. The sider database of drugs and side effects. *Nucleic acids research*, 44(D1):D1075–D1079, 2016. +[43] R Kumar, SP Kalra, H Kumar, AC Anand, and H Madan. Pancytopenia-a six year study. The Journal of the Association of Physicians of India, 49:1078-1081, 2001. +[44] Sören R Künzel, Jasjeet S Sekhon, Peter J Bickel, and Bin Yu. Metalearners for estimating heterogeneous treatment effects using machine learning. Proceedings of the national academy of sciences, 116(10):4156-4165, 2019. +[45] Nikolay Kuznetsov and Alexander Nazarov. Sharp constants in the poincaré, steklov and related inequalities (a survey). Mathematika, 61(2):328-344, 2015. +[46] Greg Landrum et al. Rdkit: Open-source cheminformatics. 2006. +[47] Michel Ledoux. Concentration of measure and logarithmic sobolev inequalities. In *Seminaire de probabilités XXXIII*, pages 120–216. Springer, 1999. +[48] Hongzhu Li, Xiangrui Gao, and Yafeng Deng. 
Stargraph: A coarse-to-fine representation method for large-scale knowledge graph, 2022. +[49] Michelle M Li, Kexin Huang, and Marinka Zitnik. Graph representation learning in biomedicine and healthcare. Nature Biomedical Engineering, pages 1-17, 2022. +[50] Jing Ma, Ruocheng Guo, Aidong Zhang, and Jundong Li. Multi-cause effect estimation with disentangled confounder representation. In Proceedings of the Thirtieth International Joint Conference on Artificial Intelligence, 2021. +[51] Ron Meir and Tong Zhang. Generalization error bounds for bayesian mixture algorithms. Journal of Machine Learning Research, 4(Oct):839-860, 2003. +[52] Stephen L Morgan and Christopher Winship. Counterfactuals and causal inference. Cambridge University Press, 2015. +[53] Alex Nichol, Joshua Achiam, and John Schulman. On first-order meta-learning algorithms. arXiv preprint arXiv:1803.02999, 2018. +[54] Alex Nichol and John Schulman. Reptile: a scalable metalearning algorithm. arXiv preprint arXiv:1803.02999, 2(3):4, 2018. +[55] Xinkun Nie and Stefan Wager. Quasi-oracle estimation of heterogeneous treatment effects. Biometrika, 108(2):299-319, 2021. +[56] Frank Nielsen and Richard Nock. On the chi square and higher-order chi distances for approximating f-divergences. IEEE Signal Processing Letters, 21(1):10-13, 2013. +[57] Hamed Nilforoshan, Johann D Gaebler, Ravi Shroff, and Sharad Goel. Causal conceptions of fairness and their consequences. In International Conference on Machine Learning, pages 16848-16887. PMLR, 2022. + +[58] Hamed Nilforoshan and Eugene Wu. Leveraging quality prediction models for automatic writing feedback. In Twelfth International AAAI Conference on Web and Social Media, 2018. +[59] Lawrence E Payne and Hans F Weinberger. An optimal poincaré inequality for convex domains. Archive for Rational Mechanics and Analysis, 5(1):286-292, 1960. +[60] Henri Poincaré. Sur les équations aux dérivées partielles de la physique mathématique. 
American Journal of Mathematics, pages 211-294, 1890. +[61] Zhaozhi Qian, Alicia Curth, and Mihaela van der Schaar. Estimating multi-cause treatment effects via single-cause perturbation. Advances in Neural Information Processing Systems, 34:23754–23767, 2021. +[62] Aniruddh Raghu, Maithra Raghu, Samy Bengio, and Oriol Vinyals. Rapid learning or feature reuse? towards understanding the effectiveness of maml. arXiv preprint arXiv:1909.09157, 2019. +[63] Bernardino Romera-Paredes and Philip Torr. An embarrassingly simple approach to zero-shot learning. In International conference on machine learning, pages 2152-2161. PMLR, 2015. +[64] Yusuf Roohani, Kexin Huang, and Jure Leskovec. Gears: Predicting transcriptional outcomes of novel multi-gene perturbations. bioRxiv, 2022. +[65] Shiv Kumar Saini, Sunny Dhamnani, Akil Arif Ibrahim, and Prithviraj Chavan. Multiple treatment effect estimation using deep generative model with task embedding. In The World Wide Web Conference, pages 1601-1611, 2019. +[66] Tim Salimans and Durk P Kingma. Weight normalization: A simple reparameterization to accelerate training of deep neural networks. Advances in neural information processing systems, 29, 2016. +[67] André Schlichting. Poincaré and log-sobolev inequalities for mixtures. Entropy, 21(1):89, 2019. +[68] Jürgen Schmidhuber. Evolutionary principles in self-referential learning, or on learning how to learn: the meta-meta... hook. PhD thesis, Technische Universität München, 1987. +[69] Laleh Seyyed-Kalantari, Haoran Zhang, Matthew BA McDermott, Irene Y Chen, and Marzyeh Ghassemi. Underdiagnosis bias of artificial intelligence algorithms applied to chest radiographs in under-served patient populations. Nature medicine, 27(12):2176-2182, 2021. +[70] Uri Shalit, Fredrik D Johansson, and David Sontag. Estimating individual treatment effect: generalization bounds and algorithms. In International Conference on Machine Learning, pages 3076-3085. PMLR, 2017. 
+[71] Ankit Sharma, Garima Gupta, Ranjitha Prasad, Arnab Chatterjee, Lovekesh Vig, and Gautam Shroff. Metaci: Meta-learning for causal inference in a heterogeneous population. arXiv preprint arXiv:1912.03960, 2019. +[72] Claudia Shi, David Blei, and Victor Veitch. Adapting neural networks for the estimation of treatment effects. Advances in neural information processing systems, 32, 2019. +[73] Yishai Shimoni, Chen Yanover, Ehud Karavani, and Yaara Goldschmmidt. Benchmarking framework for performance-evaluation of causal inference analysis. arXiv preprint arXiv:1802.05046, 2018. +[74] Aravind Subramanian, Rajiv Narayan, Steven M Corsello, David D Peck, Ted E Natoli, Xiaodong Lu, Joshua Gould, John F Davis, Andrew A Tubelli, Jacob K Asiedu, David L Lahr, Jodi E Hirschman, Zihan Liu, Melanie Donahue, Bina Julian, Mariya Khan, David Wadden, Ian C Smith, Daniel Lam, Arthur Liberzon, Courtney Toder, Mukta Bagul, Marek Orzechowski, Oana M Enache, Federica Piccioni, Sarah A Johnson, Nicholas J Lyons, Alice H Berger, Alykhan F Shamji, Angela N Brooks, Anita Vrcic, Corey Flynn, Jacqueline Rosains, David Y Takeda, Roger Hu, Desiree Davison, Justin Lamb, Kristin Ardlie, Larson Hogstrom, Peyton Greenside, Nathanael S Gray, Paul A Clemons, Serena Silver, Xiaoyun Wu, Wen-Ning Zhao, Willis Read-Button, Xiaohua Wu, Stephen J Haggarty, Lucienne V Ronco, Jesse S Boehm, Stuart L Schreiber, John G Doench, Joshua A Bittker, David E Root, Bang Wong, and Todd R Golub. A next generation connectivity map: L1000 platform and the first 1,000,000 profiles. Cell, 171(6):1437–1452.e17, November 2017. + +[75] Nicholas P Tatonetti, Patrick P Ye, Roxana Daneshjou, and Russ B Altman. Data-driven prediction of drug effects and interactions. Science translational medicine, 4(125):125ra31-125ra31, 2012. +[76] Sebastian Thrun and Lorien Pratt. Learning to learn. Springer Science & Business Media, 2012. +[77] Nilesh Tripuraneni, Dhruv Madeka, Dean Foster, Dominique Perrault-Joncas, and Michael I Jordan. 
Meta-analysis of randomized experiments with applications to heavy-tailed response data. arXiv preprint arXiv:2112.07602, 2021. +[78] Vladimir Vapnik. Principles of risk minimization for learning theory. Advances in neural information processing systems, 4, 1991. +[79] Victor Veitch, Dhanya Sridhar, and David Blei. Adapting text embeddings for causal inference. In Conference on Uncertainty in Artificial Intelligence, pages 919–928. PMLR, 2020. +[80] Stefan Wager. Stats 361: Causal inference, 2020. +[81] Stefan Wager and Susan Athey. Estimation and inference of heterogeneous treatment effects using random forests. Journal of the American Statistical Association, 113(523):1228-1242, 2018. +[82] Wei Wang, Vincent W Zheng, Han Yu, and Chunyan Miao. A survey of zero-shot learning: Settings, methods, and applications. ACM Transactions on Intelligent Systems and Technology (TIST), 10(2):1-37, 2019. +[83] Yixin Wang and David M Blei. The blessings of multiple causes. Journal of the American Statistical Association, 114(528):1574-1596, 2019. +[84] Galen Weld, Peter West, Maria Glenski, David Arbour, Ryan A Rossi, and Tim Althoff. Adjusting for confounders with text: Challenges and an empirical evaluation framework for causal inference. In Proceedings of the International AAAI Conference on Web and Social Media, volume 16, pages 1109–1120, 2022. +[85] Yongqin Xian, Bernt Schiele, and Zeynep Akata. Zero-shot learning-the good, the bad and the ugly. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 4582-4591, 2017. +[86] Steve Yadlowsky, Scott Fleming, Nigam Shah, Emma Brunskill, and Stefan Wager. Evaluating treatment prioritization rules via rank-weighted average treatment effects. arXiv preprint arXiv:2111.07966, 2021. +[87] Michihiro Yasunaga, Hongyu Ren, Antoine Bosselut, Percy Liang, and Jure Leskovec. QA-GNN: Reasoning with language models and knowledge graphs for question answering. 
In North American Chapter of the Association for Computational Linguistics (NAACL), 2021. +[88] Shing-Tung Yau. Isoperimetric constants and the first eigenvalue of a compact riemannian manifold. In Annales Scientifiques de l'École Normale Supérieure, volume 8, pages 487-507, 1975. +[89] Jinsung Yoon, James Jordon, and Mihaela Van Der Schaar. Ganite: Estimation of individualized treatment effects using generative adversarial nets. In International Conference on Learning Representations, 2018. +[90] Long Yu, Zhicong Luo, Huanyong Liu, Deng Lin, Hongzhu Li, and Yafeng Deng. Triplere: Knowledge graph embeddings via tripled relation vectors. arXiv preprint arXiv:2209.08271, 2022. +[91] Yao Zhang, Alexis Bellot, and Mihaela Schaar. Learning overlapping representations for the estimation of individualized treatment effects. In International Conference on Artificial Intelligence and Statistics, pages 1005–1014. PMLR, 2020. +[92] Marinka Zitnik, Monica Agrawal, and Jure Leskovec. Modeling polypharmacy side effects with graph convolutional networks. Bioinformatics, 34(13):i457-i466, 2018. + +# A Zero-shot Rademacher complexity and Proof of Theorem 1 + +# A.1 Problem setup and assumptions + +Let $w \in \mathcal{W} \subseteq \mathbb{R}^e$ denote an intervention and $x \in \mathcal{X} \subseteq \mathbb{R}^d$ denote an individual that received it. Assume the outcome to predict is a scalar $y \in [0,1]$ . The hypothesis class is $\mathcal{F} = \{f : (w, x) \to y\}$ . The dataset has $n$ interventions with $m$ independent units which received each intervention, i.e., first $n$ i.i.d. draws from $P_W$ and then $m$ i.i.d. draws from $P_X$ for each $w^{(j)}$ . During training we have access to noisy estimate $\tilde{y} = y + \xi$ where $\xi$ is an independent noise with $\mathbb{E}\xi = 0$ and $|\xi| \leq \epsilon$ almost surely. We are tested directly on $y$ . 
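A minimal simulation of this sampling scheme may help fix ideas. In the sketch below, the uniform distributions, the linear ground truth, and all constants are hypothetical choices for illustration (they are not part of the setup above); it draws $n$ interventions with $m$ units each, forms noisy labels $\tilde{y} = y + \xi$, and fits a least-squares model:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, e, d = 50, 20, 3, 3   # n interventions, m units per intervention
eps = 0.1                   # noise bound: |xi| <= eps almost surely

# Draw n interventions from P_W, then m units from P_X for each intervention.
W = rng.uniform(-1, 1, size=(n, e))
X = rng.uniform(-1, 1, size=(n, m, d))

# Hypothetical linear ground truth (illustrative; not clipped to [0, 1]).
a_true = rng.uniform(0, 0.2, size=e)
b_true = rng.uniform(0, 0.2, size=d)
Y = (W @ a_true)[:, None] + X @ b_true             # clean outcomes y
Y_noisy = Y + rng.uniform(-eps, eps, size=(n, m))  # noisy labels tilde-y

# Least-squares fit over the hypothetical linear class f(w, x) = a.w + b.x.
Phi = np.concatenate([np.repeat(W[:, None, :], m, axis=1), X], axis=2)
Phi = Phi.reshape(n * m, e + d)
theta_hat, *_ = np.linalg.lstsq(Phi, Y_noisy.reshape(-1), rcond=None)
train_mse = np.mean((Phi @ theta_hat - Y_noisy.reshape(-1)) ** 2)

# Fresh draws approximate the test error on clean labels y.
W_te = rng.uniform(-1, 1, size=(1000, e))
X_te = rng.uniform(-1, 1, size=(1000, d))
test_mse = np.mean((np.concatenate([W_te, X_te], axis=1) @ theta_hat
                    - (W_te @ a_true + X_te @ b_true)) ** 2)
```

Because the class is well-specified here, both the training loss (roughly the noise variance) and the clean test error are small; the interesting regime studied below is how the test error degrades over interventions never seen in training.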
The empirical risk minimizer (ERM) is

$$
\hat{f} = \arg\min_{f} \hat{L}(f) = \arg\min_{f} \frac{1}{nm} \sum_{j=1}^{n} \sum_{i=1}^{m} (f(w^{(j)}, x_{i}^{(j)}) - \tilde{y}_{i}^{(j)})^{2}.
$$

The test error is

$$
L(f) = \mathbb{E}_{w,x,y} (f(w,x) - y)^{2} \tag{8}
$$

and let $f^{*} = \arg\min_{f} L(f)$.

We are interested in bounding the excess error $L(\hat{f}) - L(f^{*})$.

Our key assumption is that interventions with similar attributes $(w)$ have similar effects in expectation. More concretely, we assume that all hypotheses in our family are smooth with respect to $w$:

Assumption 2.

$$
\forall f \in \mathcal{F}, \quad \mathbb{E}_{w,x} \left[ \left\| \frac{\partial f}{\partial w} \right\|_{2}^{2} \right] \leq \beta^{2}.
$$

Furthermore, we assume that $P_W$ satisfies a Poincaré-type inequality:

Assumption 3. For some constant $C$ that only depends on $P_W$, for any smooth function $F$,

$$
\operatorname{Var}_{w}[F(w)] \leq C\, \mathbb{E}\left[ \|\nabla_{w} F(w)\|_{2}^{2} \right].
$$

For example, $P_W$ can be any of the following distributions:

- Multivariate Gaussian: $w \in \mathbb{R}^e \sim \mathcal{N}(\mu, \Sigma)$ for some vector $\mu \in \mathbb{R}^e$ and positive semi-definite matrix $\Sigma \in \mathbb{R}^{e \times e}$;
- $w \in \mathbb{R}^e$ has independent coordinates, each with the symmetric exponential density $\frac{1}{2}e^{-|t|}$ for $t \in \mathbb{R}$;
- $P_W$ is a mixture over base distributions satisfying Poincaré inequalities, whose pair-wise chi-squared distances are bounded;
- $P_W$ is a mixture of isotropic Gaussians in $\mathbb{R}^e$;
- $P_W$ is the uniform distribution over $\mathcal{W} \subset \mathbb{R}^e$, which is open, connected, and bounded with Lipschitz boundary.

We note that mixtures of isotropic Gaussians can approximate any smooth density in $\mathbb{R}^e$ [41] (since RBF kernels are universal), showing that Assumption 3 is fairly general.
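The multivariate-Gaussian case of Assumption 3 is easy to sanity-check numerically. The sketch below is a hypothetical Monte Carlo check (the test function $F(w) = \sin(a^\top w)$ and all constants are our own choices) that $\operatorname{Var}[F(w)] \leq \|\Sigma\|_2 \, \mathbb{E}\left[\|\nabla_w F(w)\|_2^2\right]$:

```python
import numpy as np

rng = np.random.default_rng(1)
S, e = 200_000, 4

# Hypothetical smooth test function F(w) = sin(a.w); gradient is cos(a.w) * a.
a = rng.normal(size=e)
mu = rng.normal(size=e)
L = 0.3 * rng.normal(size=(e, e))
Sigma = L @ L.T + 0.1 * np.eye(e)        # a valid covariance matrix

w = rng.multivariate_normal(mu, Sigma, size=S)
F_vals = np.sin(w @ a)
grad_sq = np.cos(w @ a) ** 2 * (a @ a)   # ||cos(a.w) a||_2^2

lhs = F_vals.var()                                      # Var[F(w)]
rhs = np.linalg.eigvalsh(Sigma).max() * grad_sq.mean()  # ||Sigma||_2 E||grad F||^2
```

With enough samples the left-hand side sits strictly below the right-hand side, as the Gaussian Poincaré inequality (used in the proof of Lemma 7 below) predicts.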
We define a novel notion of function complexity specialized to the zero-shot setting. Intuitively, it measures how well the hypothesis class can fit random labels under the zero-shot sampling scheme, which first draws $n$ interventions and then $m$ recipients for each intervention. For concrete upper bounds on the zero-shot Rademacher complexity, see Section A.4.

$$
R_{nm}(\mathcal{F}) = \frac{1}{nm} \mathbb{E}_{w,x,\sigma} \sup_{f} \sum_{j=1}^{n} \sum_{i=1}^{m} \sigma_{i}^{j} f\left(w^{(j)}, x_{i}^{(j)}\right) \tag{9}
$$

where the $\sigma_{i}^{j}$ are drawn independently and uniformly from $\{-1, 1\}$.

# A.2 Formal theorem statement

Theorem 4. Under Assumptions 2, 3, with probability $1 - \delta$,

$$
\begin{array}{l} L(\hat{f}) \leq L(f^{*}) + 8(1 + \epsilon) R_{nm}(\mathcal{F}) + 8 \sqrt{\frac{(1 + \epsilon) R_{nm}(\mathcal{F}) \log(1/\delta)}{n}} \\ + (1 + \epsilon) \sqrt{\frac{\left(32 C \beta^{2} + \frac{2(1 + \epsilon)^{2}}{m}\right) \log(1/\delta)}{n}} + \frac{2 \log(1/\delta)}{3n}. \end{array}
$$

# A.3 Proof of the main theorem

We define the population loss on the noisy labels as $\widetilde{L}(f) = \mathbb{E}_{w,x,\tilde{y}}(f(w,x) - \tilde{y})^2$. By the independence of $\xi$, $\mathbb{E}_{w,x,y,\xi}(f(w,x) - y - \xi)^2 = \mathbb{E}_{w,x,y}(f(w,x) - y)^2 + \mathbb{E}[\xi^2] = L(f) + \mathbb{E}[\xi^2]$ for any $f$, so $L(\hat{f}) - L(f^*) = \widetilde{L}(\hat{f}) - \widetilde{L}(f^*)$. We shall focus on bounding the latter.

We first need a lemma that bounds the supremum of an empirical process indexed by a bounded function class.

Lemma 5 (Theorem 2.3 of [7]). Assume that the $X_{j}$ are identically distributed according to $P$, $\mathcal{G}$ is a countable set of functions from $\mathcal{X}$ to $\mathbb{R}$, and all $g \in \mathcal{G}$ are $P$-measurable, square-integrable, and satisfy $\mathbb{E}[g] = 0$.
Suppose $\sup_{g \in \mathcal{G}} \|g\|_{\infty} \leq 1$, and denote $Z = \sup_{g} \left| \sum_{j=1}^{n} g(X_{j}) \right|$. Suppose $\sigma^2 \geq \sup_{g \in \mathcal{G}} \operatorname{Var}(g(X_j))$ almost surely; then for all $t \geq 0$, we have

$$
\Pr\left[ Z \geq \mathbb{E} Z + \sqrt{2t(n\sigma^{2} + 2\mathbb{E} Z)} + \frac{t}{3} \right] \leq e^{-t}.
$$

We apply Lemma 5 with $X_{j} = (w^{(j)}, x_{1}^{j}, \ldots, x_{m}^{j}, \tilde{y}_{1}^{j}, \ldots, \tilde{y}_{m}^{j})$, $g(X_{j}) = \frac{1}{m} \sum_{i} (f(w^{(j)}, x_{i}^{(j)}) - \tilde{y}_{i}^{(j)})^{2} - \widetilde{L}(f)$, $\sigma^{2} = \sup_{f \in \mathcal{F}} \operatorname{Var}\left( \frac{1}{m} \sum_{i} (f(w^{(j)}, x_{i}^{(j)}) - \tilde{y}_{i}^{(j)})^{2} \right)$, and $t = \log(1/\delta)$. Since $f - \tilde{y} \in [-1, 1]$, $g \in [-1, 1]$. With probability $1 - \delta$,

$$
n \sup_{f} \left| \widehat{L}(f) - \widetilde{L}(f) \right| \leq n \mathbb{E} \sup_{f} \left| \widehat{L}(f) - \widetilde{L}(f) \right| + \sqrt{2 \log \frac{1}{\delta} \left( n \sigma^{2} + 2n \mathbb{E} \sup_{f} \left| \widehat{L}(f) - \widetilde{L}(f) \right| \right)} + \frac{1}{3} \log \frac{1}{\delta}.
$$

Multiplying both sides by $1/n$, and using $\sqrt{a + b} \leq \sqrt{a} + \sqrt{b}$,

$$
\sup_{f} \left| \widehat{L}(f) - \widetilde{L}(f) \right| \leq \mathbb{E} \sup_{f} \left| \widehat{L}(f) - \widetilde{L}(f) \right| + 2 \sqrt{\frac{\mathbb{E} \sup_{f} \left| \widehat{L}(f) - \widetilde{L}(f) \right| \log(1/\delta)}{n}} + \sqrt{\frac{2 \sigma^{2} \log(1/\delta)}{n}} + \frac{\log(1/\delta)}{3n}. \tag{10}
$$

The next lemma bounds the variance $\sigma^2$ in equation (10).

Lemma 6.

$$
\forall f \in \mathcal{F}, \quad \operatorname{Var}_{w^{(j)}, x_{1 \ldots m}^{j}, \tilde{y}_{1 \ldots m}^{j}} \left[ \frac{1}{m} \sum_{i=1}^{m} (f(w^{(j)}, x_{i}^{(j)}) - \tilde{y}_{i}^{(j)})^{2} \right] \leq 4(1 + \epsilon)^{2} C \beta^{2} + \frac{(1 + \epsilon)^{4}}{4m}.
$$

Proof of Lemma 6.
Using the law of total variance, if we write

$$
g(w^{(j)}, x_{1 \ldots m}^{j}, \tilde{y}_{1 \ldots m}^{j}) = \frac{1}{m} \sum_{i=1}^{m} (f(w^{(j)}, x_{i}^{(j)}) - \tilde{y}_{i}^{(j)})^{2},
$$

then

$$
\operatorname{Var}[g] = \operatorname{Var}_{w}\left[ \mathbb{E}_{x, \tilde{y}}[g(w, x, \tilde{y}) \mid w] \right] + \mathbb{E}_{w}\left[ \operatorname{Var}_{x, \tilde{y}}[g(w, x, \tilde{y}) \mid w] \right]. \tag{11}
$$

To bound the first term of equation (11), we use the Poincaré-type inequalities in Assumption 3. For each of the example distributions, we show that they indeed satisfy Assumption 3.

Lemma 7. Each of the example distributions in Assumption 3 satisfies a Poincaré-type inequality.

Proof.

- When $P_W$ is the uniform distribution over $\mathcal{W} \subset \mathbb{R}^e$, which is open, connected, and bounded with Lipschitz boundary, we use the Poincaré-Wirtinger inequality [60] on the smooth function $\mathbb{E}[g \mid w]$: For some constant $C$ that only depends on $P_W$,

$$
\operatorname{Var}_{w}[\mathbb{E}[g \mid w]] \leq C \mathbb{E}\left[ \|\nabla_{w} \mathbb{E}[g \mid w]\|_{2}^{2} \right]. \tag{12}
$$

$C$ is the Poincaré constant for the domain $\mathcal{W}$ in the $L_{2}$ norm. It can be bounded by $1/\lambda_{1}$, where $\lambda_{1}$ is the first eigenvalue of the negative Laplacian of the manifold $\mathcal{W}$ [88]. Many previous works study the optimal Poincaré constants for various domains [45]. For example, when $w$ is uniform over a $\mathcal{W}$ that is a bounded, convex, Lipschitz domain with diameter $d$, $C \leq d/\pi$ [59].

We can also apply probabilistic Poincaré inequalities over a non-Lebesgue measure $P_W$:

- When $w \sim \mathcal{N}(\mu, \Sigma)$, we use the Gaussian Poincaré inequality (see e.g.
Theorem 3.20 of [6] and using a change of variables),

$$
\operatorname{Var}[F(w)] \leq \mathbb{E}[\langle \Sigma \nabla_{w} F(w), \nabla_{w} F(w) \rangle].
$$

We apply this with $F(w) = \mathbb{E}[g \mid w]$. Since $\mathbb{E}[v^\top A v] = \mathbb{E}[\operatorname{Tr}(v^\top A v)] = \mathbb{E}[\operatorname{Tr}(A v v^\top)] = \operatorname{Tr}(A \mathbb{E}[v v^\top]) \leq \|A\|_2 \mathbb{E}\left[\|v\|_2^2\right]$,

$$
\operatorname{Var}_{w}[\mathbb{E}[g \mid w]] \leq \|\Sigma\|_{2} \mathbb{E}\left[ \|\nabla_{w} \mathbb{E}[g \mid w]\|_{2}^{2} \right],
$$

which satisfies equation (12) with $C = \|\Sigma\|_2$.

- When $w \in \mathbb{R}^e$ has independent coordinates $w_1, \ldots, w_e$ and each coordinate has the symmetric exponential density $\frac{1}{2}e^{-|t|}$ for $t \in \mathbb{R}$, we first bound a single dimension using Lemma 4.1 of [47], which says that for any function $k \in L^1$,

$$
\operatorname{Var}(k(w_{i})) \leq 4 \mathbb{E}\left[ k'(w_{i})^{2} \right],
$$

which, combined with the Efron-Stein inequality (Theorem 3.1 of [6]),

$$
\operatorname{Var}(F(w)) \leq \mathbb{E} \sum_{i=1}^{e} \operatorname{Var}(F(w) \mid w_{1}, \ldots, w_{i-1}, w_{i+1}, \ldots, w_{e}),
$$

yields

$$
\operatorname{Var}(F(w)) \leq 4 \mathbb{E}\left[ \|\nabla F(w)\|_{2}^{2} \right],
$$

which satisfies equation (12) with $C = 4$.

Lastly, we consider the case where $P_W$ is a mixture over base distributions satisfying Poincaré inequalities. We first consider the case where the pair-wise chi-squared distances are bounded. Next, we show that a mixture of isotropic Gaussians satisfies a Poincaré inequality without further conditions on the pair-wise chi-squared distances.
- When $\{P_W^q\}_{q \in \mathcal{Q}}$ is a family of distributions, each satisfying a Poincaré inequality with constant $C^q$, and $P_W$ is any mixture over $\{P_W^q\}_{q \in \mathcal{Q}}$ with mixing distribution $\mu$, let $K_P(\mu) = \operatorname{ess\,sup}_{q \sim \mu} C^q$, which upper-bounds the base Poincaré constants almost surely, and $K_{\chi^2}^p(\mu) = \mathbb{E}_{q, q' \sim \mu}[(1 + \chi^2(P_W^q \| P_W^{q'})^p)]^{1/p}$, which controls the pairwise $\chi^2$-divergences. Using Theorem 1 of [9], we get that $P_W$ satisfies a Poincaré inequality with constant $C$ such that $C \leq K_P(\mu)(p^* + K_{\chi^2}^p(\mu))$, where $p^*$ is the dual exponent of $p$ satisfying $1/p + 1/p^* = 1$.

As an example, when the base distributions are from the same exponential family and the natural parameter space is affine, such as mixtures of Poisson or Multinomial distributions, the pair-wise chi-squared distances are bounded (under some additional conditions) and hence the mixture satisfies a Poincaré inequality. More formally, let $p_{\theta}(x) = \exp\left(T(x)^{\top}\theta - A(\theta) + k(x)\right)$, where $\theta$ lies in the natural parameter space $\Theta$ and $A(\theta)$ is the log partition function. Lemma 1 in [56] shows that

$$
\chi^{2}\left(p_{\theta_{1}} \| p_{\theta_{2}}\right) = e^{\left(A\left(2\theta_{2} - \theta_{1}\right) - \left(2A\left(\theta_{2}\right) - A\left(\theta_{1}\right)\right)\right)} - 1,
$$

which is bounded as long as $2\theta_{2} - \theta_{1} \in \Theta$. This is satisfied for a mixture of 1-D Poisson distributions, which can be written as $p(w \mid \lambda) = \frac{1}{w!}\exp(w \log \lambda - \lambda)$ with natural parameter space $\mathbb{R}$, and for a mixture of $e$-dimensional Multinomial distributions $p(w \mid \pi) = \exp\left(\langle w, \log\left(\pi / \left(1 - \sum_{i=1}^{e-1}\pi_i\right)\right)\rangle + \log\left(1 - \sum_{i=1}^{e-1}\pi_i\right)\right)$ with natural parameter space $\mathbb{R}^{e-1}$.
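For the Poisson case, the closed form from Lemma 1 of [56] is straightforward to check numerically. In natural parameters $\theta = \log\lambda$ with $A(\theta) = e^\theta$, the combination $2\theta_2 - \theta_1 = \log(\lambda_2^2/\lambda_1)$ always lies in $\Theta = \mathbb{R}$, so the divergence is finite. A minimal sketch (the rates $\lambda_1, \lambda_2$ are arbitrary choices) compares the closed form against direct summation of $\sum_k p_{\theta_2}(k)^2 / p_{\theta_1}(k) - 1$:

```python
from math import exp, lgamma, log

lam1, lam2 = 2.0, 3.0            # arbitrary Poisson rates
th1, th2 = log(lam1), log(lam2)  # natural parameters theta = log(lambda)
A = exp                          # log-partition function A(theta) = e^theta

# Closed form: chi2(p_th1 || p_th2) = e^{A(2 th2 - th1) - (2 A(th2) - A(th1))} - 1.
chi2_closed = exp(A(2 * th2 - th1) - (2 * A(th2) - A(th1))) - 1

# Direct truncated summation of sum_k p2(k)^2 / p1(k) - 1, done in log space
# to avoid underflow:
# log[p2(k)^2 / p1(k)] = k log(lam2^2 / lam1) - 2 lam2 + lam1 - lgamma(k + 1).
chi2_direct = sum(
    exp(k * log(lam2 ** 2 / lam1) - 2 * lam2 + lam1 - lgamma(k + 1))
    for k in range(200)
) - 1
```

Both routes give $e^{\lambda_2^2/\lambda_1 - 2\lambda_2 + \lambda_1} - 1$; the truncation at $k = 200$ is far past where the series terms become negligible for these rates.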
When applied to the Gaussian family, the natural parameters are

$$
\theta_{q} = \left( \begin{array}{c} \Sigma_{q}^{-1} \mu_{q} \\ \operatorname{vec}\left(-\frac{1}{2}\Sigma_{q}^{-1}\right) \end{array} \right).
$$

Since the covariance has to be a positive definite matrix, $2\theta_{q} - \theta_{q'}$ may not be a valid natural parameter. We deal with this in the next case.

- When $\{P_W^q\}_{q \in \mathcal{Q}}$ is a mixture of isotropic Gaussians, each with mean $\mu_q \in \mathbb{R}^e$ and covariance $\Sigma_q = \sigma_q^2 I_e$, and each satisfying a Poincaré inequality with constant $C^q$ (from the single-Gaussian case above we know that $C^q \leq \sigma_q^2$), $P_W$ also satisfies a Poincaré inequality. We prove this via induction. The key lemma is below:

Lemma 8 (Corollary 1 of [67]). Suppose measure $p_0$ is absolutely continuous with respect to measure $p_1$, and $p_0, p_1$ satisfy Poincaré inequalities with constants $C_0, C_1$ respectively. Then for all $\alpha \in [0,1]$ and $\beta = 1 - \alpha$, the mixture measure $p = \alpha p_0 + \beta p_1$ satisfies a Poincaré inequality with $C \leq \max\{C_0, C_1(1 + \alpha \chi_1)\}$, where $\chi_1 = \int \frac{dp_0}{dp_1} dp_0 - 1$.

We sort the components in order of non-decreasing $\sigma_q^2$ and add in the components one by one. For each new component $i = 2, \ldots, |\mathcal{Q}|$, we apply the above lemma with $p_0$ being the mixture of $P_W^1, \ldots, P_W^{i-1}$ and $p_1$ being the new component $P_W^i$. We only need to prove that $\chi_1$ is bounded at every step. Suppose $p_0 = \sum_{j=1}^{i-1} \alpha_j P_W^j$ with $\sum_{j=1}^{i-1} \alpha_j = 1$, $p_1 = P_W^i$, and $P_W^j = \frac{1}{(2\pi)^{e/2} \sigma_j^e} \exp\left\{-\frac{1}{2}(w - \mu_j)^\top \Sigma_j^{-1}(w - \mu_j)\right\}$.
Therefore

$$
\begin{array}{l} \chi_{1} + 1 = \int \frac{dp_{0}}{dp_{1}}\, dp_{0} = \int_{w} \frac{p_{0}(w)^{2}}{p_{1}(w)}\, dw \\ = \frac{\sigma_{i}^{e}}{(2\pi)^{e/2}} \int_{w} \exp\left\{\frac{\|w - \mu_{i}\|^{2}}{2\sigma_{i}^{2}}\right\} \left( \sum_{j=1}^{i-1} \frac{\alpha_{j}^{2}}{\sigma_{j}^{2e}} \exp\left\{-\frac{\|w - \mu_{j}\|^{2}}{\sigma_{j}^{2}}\right\} + \sum_{j < j'} \frac{2\alpha_{j}\alpha_{j'}}{\sigma_{j}^{e}\sigma_{j'}^{e}} \exp\left\{-\frac{\|w - \mu_{j}\|^{2}}{2\sigma_{j}^{2}} - \frac{\|w - \mu_{j'}\|^{2}}{2\sigma_{j'}^{2}}\right\} \right) dw. \end{array}
$$

The integral converges provided $2\sigma_{i}^{2} > \sigma_{j}^{2}$ for all $j < i$, which is satisfied since the components are sorted so that $\sigma_{i}^{2} \geq \sigma_{j}^{2}$. Hence $\chi_1$ is finite at every step of the induction.

□

Next we observe that

$$
\nabla_{w} \mathbb{E}[g \mid w] = \nabla_{w} \int_{x, \tilde{y}} (f(w, x) - \tilde{y})^{2} p(x, \tilde{y})\, dx\, d\tilde{y} = 2 \int_{x, \tilde{y}} (f(w, x) - \tilde{y}) \frac{\partial f}{\partial w} p(x, \tilde{y})\, dx\, d\tilde{y} = 2 \mathbb{E}\left[ (f(w, x) - \tilde{y}) \frac{\partial f}{\partial w} \right].
$$

Since $|f(w, x) - \tilde{y}| \leq 1 + \epsilon$ almost surely and $\mathbb{E}\left[\left\|\frac{\partial f}{\partial w}\right\|_{2}^{2}\right] \leq \beta^{2}$,

$$
\mathbb{E}\left[ \|\nabla_{w} \mathbb{E}[g \mid w]\|_{2}^{2} \right] \leq 4 \mathbb{E}\left[ \left\| (f(w, x) - \tilde{y}) \frac{\partial f}{\partial w} \right\|_{2}^{2} \right] \leq 4(1 + \epsilon)^{2} \beta^{2}.
$$

Therefore

$$
\operatorname{Var}_{w}[\mathbb{E}[g \mid w]] \leq C \mathbb{E}\left[ \|\nabla_{w} \mathbb{E}[g \mid w]\|_{2}^{2} \right] \leq 4(1 + \epsilon)^{2} C \beta^{2}.
$$

To bound the second term of equation (11), we use concentration of the mean of $m$ i.i.d. random variables.
Conditioned on $w^{(j)}$, the losses $(f(w^{(j)}, x_i^{(j)}) - \tilde{y}_i^{(j)})^2$ are i.i.d. and bounded in $[0, (1 + \epsilon)^2]$. Hence each variable has variance at most $((1 + \epsilon)^2 - 0)^2 / 4 = (1 + \epsilon)^4 / 4$, and the mean has variance at most $(1 + \epsilon)^4 / 4m$.

Therefore $\operatorname{Var}[g] \leq 4(1 + \epsilon)^2 C \beta^2 + (1 + \epsilon)^4 / 4m$.

Proof of Theorem 4.

$$
\begin{array}{l} L(\hat{f}) - L(f^{*}) \leq 2 \sup_{f \in \mathcal{F}} |\widetilde{L}(f) - \hat{L}(f)| \\ \leq 2 \mathbb{E} \sup_{f} |\widetilde{L}(f) - \hat{L}(f)| + 4 \sqrt{\frac{\mathbb{E} \sup_{f} \left| \hat{L}(f) - \widetilde{L}(f) \right| \log(1/\delta)}{n}} + \sqrt{\frac{\left(32(1 + \epsilon)^{2} C \beta^{2} + \frac{2(1 + \epsilon)^{4}}{m}\right) \log(1/\delta)}{n}} + \frac{2 \log(1/\delta)}{3n} \end{array} \tag{13}
$$

by equation (10) and Lemma 6.

We now show that $\mathbb{E} \sup_f |\widetilde{L}(f) - \hat{L}(f)| \leq 4(1 + \epsilon) R_{nm}(\mathcal{F})$.
This is similar to the argument for classical Rademacher complexity:

$$
\begin{array}{l} \mathbb{E}_{w,x,\tilde{y}} \sup_{f} \left( \frac{1}{nm} \sum_{i,j} (f(w^{(j)}, x_{i}^{(j)}) - \tilde{y}_{i}^{(j)})^{2} - \mathbb{E}_{w,x,\tilde{y}} (f(w, x) - \tilde{y})^{2} \right) \\ \leq \frac{1}{nm} \mathbb{E}_{S,S'} \sup_{f} \left( \sum_{i,j} [(f(w^{(j)}, x_{i}^{(j)}) - \tilde{y}_{i}^{(j)})^{2} - (f(w'^{(j)}, x_{i}'^{(j)}) - \tilde{y}_{i}'^{(j)})^{2}] \right) \\ = \frac{1}{nm} \mathbb{E}_{S,S',\sigma} \sup_{f} \left( \sum_{i,j} [\sigma_{i}^{j} (f(w^{(j)}, x_{i}^{(j)}) - \tilde{y}_{i}^{(j)})^{2} - \sigma_{i}^{j} (f(w'^{(j)}, x_{i}'^{(j)}) - \tilde{y}_{i}'^{(j)})^{2}] \right) \\ \leq \frac{1}{nm} \mathbb{E}_{S,\sigma} \sup_{f} \left( \sum_{i,j} \sigma_{i}^{j} (f(w^{(j)}, x_{i}^{(j)}) - \tilde{y}_{i}^{(j)})^{2} \right) + \frac{1}{nm} \mathbb{E}_{S',\sigma} \sup_{f} \left( \sum_{i,j} \sigma_{i}^{j} (f(w'^{(j)}, x_{i}'^{(j)}) - \tilde{y}_{i}'^{(j)})^{2} \right) \\ = 2 R_{nm}(\widetilde{\mathcal{L}}), \end{array}
$$

where $S'$ is an independent copy of the sample $S$, $\widetilde{\mathcal{L}} = \{(w, x, \tilde{y}) \mapsto (f(w, x) - \tilde{y})^2 : f \in \mathcal{F}\}$ is the loss class, and the first inequality uses Jensen's inequality and the convexity of $\sup$.

Now we prove the equivalent of Talagrand's contraction lemma to show that $R_{nm}(\widetilde{\mathcal{L}}) \leq 2(1 + \epsilon) R_{nm}(\mathcal{F})$. Note that the squared loss is $2(1 + \epsilon)$-Lipschitz since $\left|\frac{\partial (f - \tilde{y})^2}{\partial f}\right| = 2|f - \tilde{y}| \leq 2(1 + \epsilon)$. We use the following lemma to prove this:

Lemma 9 (Lemma 5 of [51]).
Suppose $\{\phi_i\}$, $\{\psi_i\}$, $i = 1, \ldots, N$ are two sets of functions on $\Theta$ such that for each $i$ and all $\theta, \theta' \in \Theta$, $|\phi_i(\theta) - \phi_i(\theta')| \leq |\psi_i(\theta) - \psi_i(\theta')|$. Then for all functions $c: \Theta \to \mathbb{R}$,

$$
\mathbb{E}_{\sigma}\left[ \sup_{\theta} \left\{ c(\theta) + \sum_{i=1}^{N} \sigma_{i} \phi_{i}(\theta) \right\} \right] \leq \mathbb{E}_{\sigma}\left[ \sup_{\theta} \left\{ c(\theta) + \sum_{i=1}^{N} \sigma_{i} \psi_{i}(\theta) \right\} \right].
$$

For any set of $w, x$, we apply Lemma 9 with $\Theta = \mathcal{F}$, $\theta = f$, $N = nm$, $\phi_{ij}(f) = (f(w^{(j)}, x_i^{(j)}) - \tilde{y}_i^{(j)})^2$, $\psi_{ij}(f) = 2(1 + \epsilon) f(w^{(j)}, x_i^{(j)})$, and $c(\theta) = 0$. Since $\left| (f - \tilde{y})^2 - (f' - \tilde{y})^2 \right| \leq 2(1 + \epsilon) |f - f'|$, the condition for Lemma 9 holds. We take the expectation over $w, x$ and divide both sides by $nm$ to get

$$
\frac{1}{nm} \mathbb{E}_{w,x,\sigma} \sup_{f} \sum_{j=1}^{n} \sum_{i=1}^{m} \sigma_{i}^{j} (f(w^{(j)}, x_{i}^{(j)}) - \tilde{y}_{i}^{(j)})^{2} \leq \frac{2(1 + \epsilon)}{nm} \mathbb{E}_{w,x,\sigma} \sup_{f} \sum_{j=1}^{n} \sum_{i=1}^{m} \sigma_{i}^{j} f(w^{(j)}, x_{i}^{(j)}),
$$

which means $R_{nm}(\widetilde{\mathcal{L}}) \leq 2(1 + \epsilon) R_{nm}(\mathcal{F})$. Substituting this into inequality (13) finishes the proof.

□

# A.4 Zero-shot Rademacher complexity bound for the linear hypothesis class

Consider the linear hypothesis class $\mathcal{F} = \{f(w, x) = w_1^{\top} w + w_2^{\top} x : \|w_1\|_2 \leq B_1, \|w_2\|_2 \leq B_2\}$. Suppose $\|w\|_2 \leq 1$ and $\|x\|_2 \leq 1$.
$$
\begin{array}{l} R_{nm}(\mathcal{F}) = \frac{1}{nm} \mathbb{E}_{\sigma,w,x} \sup_{f} \left\{ \langle w_1, \sum_{ij} \sigma_{i}^{j} w^{(j)} \rangle + \langle w_2, \sum_{ij} \sigma_{i}^{j} x_{i}^{(j)} \rangle \right\} \\ = \frac{1}{nm} \left( B_1 \mathbb{E}_{\sigma,w} \| \sum_{ij} \sigma_{i}^{j} w^{(j)} \|_2 + B_2 \mathbb{E}_{\sigma,x} \| \sum_{ij} \sigma_{i}^{j} x_{i}^{(j)} \|_2 \right) \\ \leq \frac{1}{nm} \left( B_1 \sqrt{m \sum_{j} \| w^{(j)} \|_2^2} + B_2 \sqrt{\sum_{ij} \| x_{i}^{(j)} \|_2^2} \right) \\ \leq (B_1 + B_2) / \sqrt{nm}. \end{array}
$$

Interestingly, the bound matches the standard Rademacher complexity bound for $nm$ independent samples. The relationship between the standard and zero-shot Rademacher complexities for other function classes is an important future direction.

# B Extended Related Work

Our approach to zero-shot prediction of intervention effects is related to recent advances in heterogeneous treatment effect (HTE) estimation, zero-shot learning, and meta-learning.

# B.1 Heterogeneous treatment effect (HTE) estimation

Conditional average treatment effect (CATE) estimation. A number of approaches have been developed to predict the effect of an existing intervention on an individual or subgroup, based on historical data from individuals who received it. This problem is often referred to in the literature as heterogeneous treatment effect (HTE) estimation [28, 13], to denote that the goal is to detect heterogeneities in how individuals respond to an intervention. A more specific instance of HTE estimation, which we focus on here, is conditional average treatment effect (CATE) estimation [81, 44], in which the goal is to predict the effect of a treatment conditioned on an individual's features.
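As a toy illustration of CATE estimation (a hypothetical simulation of our own, not an implementation of any specific estimator cited in this section), one can fit separate outcome models on treated and control units and take their difference, T-learner style:

```python
import numpy as np

rng = np.random.default_rng(2)
N = 4000

# Hypothetical observational data for a single binary intervention.
x = rng.uniform(-1, 1, size=N)           # individual's feature
t = rng.binomial(1, 0.5, size=N)         # randomized treatment assignment
tau_true = 1.0 + 2.0 * x                 # heterogeneous treatment effect
y = 0.3 * x + t * tau_true + rng.normal(0, 0.1, size=N)

def linfit(xs, ys):
    """Least-squares fit of ys ~ intercept + slope * xs."""
    A = np.stack([np.ones_like(xs), xs], axis=1)
    coef, *_ = np.linalg.lstsq(A, ys, rcond=None)
    return coef

# T-learner style: fit outcome models mu1, mu0 on treated / control
# separately, then estimate CATE(x) as mu1(x) - mu0(x).
c1 = linfit(x[t == 1], y[t == 1])
c0 = linfit(x[t == 0], y[t == 0])
cate_hat = (c1[0] - c0[0]) + (c1[1] - c0[1]) * x
```

With randomized treatment and a well-specified model this recovers the heterogeneous effect; the confounded, zero-shot setting discussed next is what makes the general problem hard.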
A variety of methods and specific models have been developed to achieve this goal [28, 34, 23, 30, 81, 70, 1, 89, 26, 91, 27, 16, 14, 44, 36, 13, 3], and we refer to Bica et al. and Curth et al. for a detailed review of these methods [5, 16]. These methods estimate CATE for an existing intervention, based on historical data from individuals who received it and those who did not.

While these approaches have a number of useful applications, they do not address CATE for novel interventions which did not exist during training (zero-shot). Our primary contribution is a meta-learning framework to leverage these existing CATE estimators for zero-shot predictions. In the CaML framework (Figure 2), each task corresponds to predicting CATE for a single intervention. We synthesize a task by sampling a natural experiment for each intervention, and then use any existing CATE estimator to generate a noisy target label for the task (Step 2: estimate pseudo-outcomes). We rely on pseudo-outcome estimates as training labels because prior work has shown that training on observed outcomes directly leads to biased CATE estimates [10, 44, 36], a result which we find holds true in our experiments as well (see T-learner and S-learner w/ meta-learning in Tables 2 and 3).

Pseudo-outcome estimators. Prior work has developed a variety of methods to estimate CATE pseudo-outcomes, which are noisy but unbiased estimates of CATE, such as the X-learner [44], R-learner [55], DR-learner [36], and RA-learner [16]. Moreover, the outputs of any other CATE estimation method, such as methods which directly estimate CATE via an end-to-end neural network [34, 70, 72], are an equally valid choice of pseudo-outcome. The literature on pseudo-outcome estimation is growing continuously as new estimators are developed [21, 40]. Typically, these estimators are specific to a single binary intervention, for which a set of nuisance models are trained and used to compute the pseudo-outcomes.
As such, applying meta-learning algorithms to these pseudo-outcomes requires synthesizing a natural experiment for each intervention, which corresponds to a single task in the CaML framework.

Multi-cause estimators. Our methods to address zero-shot CATE estimation for combinations of interventions are distinct from multi-cause estimators for combinations of binary or categorical interventions [83, 61, 65]. Recent work has shown that these methods can predict the effects of new combinations of interventions [50], when every intervention in the combination has been observed at some point during training. However, these methods do not estimate CATE for novel interventions which did not exist during training. By contrast, CaML estimates CATE for zero-shot intervention combinations in which none of the interventions in the combination was ever observed during training (Appendix Table C).

# B.2 Zero-shot learning

Zero-shot learning (ZSL) has traditionally aimed to reason over new concepts and classes [85, 63] which did not exist at training time. While ZSL has primarily focused on natural language processing and computer vision [82], recent interest has been sparked in generalizing over novel interventions (zero-shot) in the biomedical domain [64, 29], in which data can be cheaply collected for hundreds or thousands of possible interventions [92, 75, 19]. However, general-purpose zero-shot causal methods have been largely unexplored. Notable exceptions include GraphITE [25] and SIN [35], which each extend a specific CATE estimation method [55, 44] to incorporate intervention information $(W)$ . However, these approaches have significant drawbacks, which we discuss in Section 2.

# B.3 Meta-learning

Meta-learning, or learning to learn, aims to train models which can quickly adapt to new settings and tasks.
The key idea is to enable a model to gain experience over multiple learning episodes (typically corresponding to distinct tasks) to accelerate learning in subsequent learning episodes [32]. The meta-learning literature is rich and spans multiple decades [76, 68, 66, 4], with recent interest focused on model-agnostic methods to train deep learning models to quickly adapt to new tasks [20, 62, 54]. A common focus in the meta-learning literature is few-shot learning, in which a model must adapt to a new task given a small support set of labeled examples. By contrast, we focus on the zero-shot setting, in which no such support set exists. However, we hypothesize that the typical meta-learning problem formulation and training algorithms may also improve zero-shot performance. Thus, CaML's problem formulation and algorithm draw inspiration from the meta-learning literature, particularly the Reptile algorithm [54] and its application to other tasks in causal inference [71]. Our experimental results show that this meta-learning formulation improves CaML's performance, compared to a standard multi-task learning strategy.

# C Experimental details

# C.1 Experimental setup

Here, we provide more details about the experimental setup for each investigated setting. This serves to complement the high-level overview given in Table 1. Experiments were run using Google Cloud Services. Deep learning-based methods (i.e., CaML and its ablations, S-learner w/ meta-learning, T-learner w/ meta-learning, SIN, GraphITE, FlexTENet, TARNet, and DragonNet) were run on n1-highmem-64 machines with 4x NVIDIA T4 GPU devices. The remaining baselines (RA-learner, R-learner, X-learner, and T-learner) were run on n1-highmem-64 machines featuring 64 CPUs.

Fair comparison. We perform hyper-parameter optimization with random search for all models, with the meta-testing dataset predetermined and held out.
To avoid "hyperparameter hacking", hyperparameter ranges are consistent between methods wherever possible, and were chosen using defaults similar to prior work [35, 25]. The final model hyperparameters were determined using performance metrics (specific to each dataset) computed on the meta-validation dataset, taking the best hyperparameters over 48 runs (6 servers x 4 NVIDIA T4 GPUs per server x 2 runs per GPU) (Appendix C.4). All table results are computed as the mean across 8 runs of the final model with distinct random seeds.

# C.1.1 Claims dataset

Interventions (W): We consider drug prescriptions consisting of either one drug, or two drugs prescribed in combination. We observed 745 unique single drugs and 22,883 unique drug pairs, excluding interventions which occurred fewer than 500 times. Time of intervention corresponds to the first day of exposure. To obtain intervention information, we generated pre-trained drug embeddings from a large-scale biomedical knowledge graph [8] (see Appendix C.5). Drugs correspond to nodes in the knowledge graph, which are linked to other nodes (e.g. genes, based on the protein target of the drug). Drug combination embeddings are the sum of the embeddings of their constituent drugs.

Control group. A challenge in such causal analyses of clinical settings is defining a control group. We randomly sample $5\%$ of patients (1.52M) to use as controls, with a 40/20/40 split between meta-train/meta-val/meta-test. When sampling a natural experiment for a given intervention, we select all patients from this control group who did not receive that intervention. An additional challenge is defining the time of intervention for the control group. It is not possible to naively sample a random date, because there are large quiet periods in the claims dataset in which no data is logged.
We thus sample a date on which the control patient received a random drug; our measure of CATE therefore estimates the increase in side-effect likelihood from the drug(s) $W$ compared to another drug intervention chosen at random.

Outcome $(Y)$ : We focus on the side effect pancytopenia: a deficiency across all three blood cell lines (red blood cells, white blood cells, and platelets). Pancytopenia is life-threatening, with a $10-20\%$ mortality rate [38, 43], and is a rare side effect of many common medications [42] (e.g. arthritis and cancer drugs), which in turn require intensive monitoring of blood work. Our outcome is defined as the (binary) occurrence of pancytopenia within 90 days of intervention exposure.

Features $(X)$ : Following prior work [24], patient medical history features were constructed by time-binned counts of each unique medical code (diagnosis, procedure, lab result, drug prescription) before the drug was prescribed. In total, 443,940 features were generated from the following time bins: 0-24 hours, 24-48 hours, 2-7 days, 8-30 days, 31-90 days, 91-365 days, and $365+$ days prior. All individuals in the dataset provided by the insurance company had at least 50 unique days of claims data.

Metrics: We rely on best practices for evaluating CATE estimators as established by recent work [86, 11], which recommend assessing treatment rules by comparing subgroups across different quantiles of estimated CATE. We follow the high vs. others RATE (rank-weighted average treatment effect) approach from Yadlowsky et
al. [86], which computes the difference between the average treatment effect (ATE) of the top-ranked individuals (by predicted CATE) and the ATE across all individuals:

$$
R A T E @ u = \mathbb {E} \left[ Y (1) - Y (0) \mid F _ {S} (S (X)) \geq u \right] - \mathbb {E} \left[ Y (1) - Y (0) \right], \tag {14}
$$

where $S(\cdot)$ is a priority score which ranks samples from lowest to highest predicted CATE, and $F_{S}(\cdot)$ is the cumulative distribution function (CDF) of $S(X_{i})$ . For instance, RATE @ 0.99 is the difference between the ATE of the top $1\%$ of samples (by estimated CATE) and the ATE across all samples, which we would expect to be high if the CATE estimator is accurate. The real-world use case of our model would be preventing drug prescription for a small subset of high-risk individuals. Thus, more specifically, for each task $j$ , intervention $w_{j}$ in the meta-dataset, and meta-model $\Psi_{\theta}$ (our priority score $S(\cdot)$ ), we compute $RATE@u$ for each $u$ in [0.999, 0.998, 0.995, 0.99] across individuals who received the intervention.

We now summarize how to estimate RATE performance metrics for a single intervention (task). As RATE is calculated separately per intervention, we use the simplified notation (i.e. $Y_{i}(1)$ instead of $Y_{i}(w)$ ) from Section 3. Due to the fundamental problem of causal inference (we can only observe $Y_{i}(0)$ or $Y_{i}(1)$ for a given sample), the true RATE, as defined above, cannot be directly observed.

We follow the method outlined in Sections 2.2 and 2.4 of Yadlowsky et al. [86], in which we compute $\widehat{\Gamma}_i$ , a (noisy but unbiased) estimate of CATE, which is in turn used to estimate RATE:

$$
\mathbb {E} \left[ \widehat {\Gamma} _ {i} \mid X _ {i} \right] \approx \tau \left(X _ {i}\right) = \mathbb {E} \left[ Y _ {i} (1) - Y _ {i} (0) \mid X _ {i} \right].
\tag {15}
$$

Our data is observational, and as such we can estimate $\widehat{\Gamma}_i$ using a direct non-parametric estimator [80]:

$$
\widehat {\Gamma} _ {i} = W _ {i} \left(Y _ {i} - \hat {m} \left(X _ {i}, 0\right)\right) + \left(1 - W _ {i}\right) \left(\hat {m} \left(X _ {i}, 1\right) - Y _ {i}\right) \tag {16}
$$

$$
m (x, w) = \mathbb {E} \left[ Y _ {i} (w) \mid X _ {i} = x \right] \tag {17}
$$

where $m(x, w)$ is the expected outcome under intervention $w$ . Here $\hat{m}(x, w)$ is a nonparametric estimate of $m(x, w)$ , which we obtain by cross-fitting a model to the intervention's natural experiment over 5 folds. We use random forest models for $\hat{m}(x, w)$ , as they perform well (achieving $\geq 0.90$ ROC AUC across all meta-testing tasks for predicting outcomes) and are robust to the choice of hyperparameters.

RATE can then be estimated via a sample-averaging estimator. Specifically, we compute the difference between the average value of $\widehat{\Gamma}_i$ for those in the top $u$ percent of individuals (based on our meta-model's predictions) and the average $\widehat{\Gamma}_i$ across all individuals. For further discussion on estimating RATE, we refer readers to [86]. Note that estimates of RATE are unbounded: RATE can be less than 0 (if predictions are inversely related to CATE).

Finally, because our meta-testing dataset consists of individuals treated with drugs known in the medical literature to cause pancytopenia (identified by filtering drugs using the side effect database SIDER [42]), observational metrics of recall and precision are also a rough proxy for successful CATE estimation. Thus, as secondary metrics, we also compute Recall @ u and Precision @ u for the same set of thresholds as RATE, where a positive label is defined as the occurrence of pancytopenia after intervention. We find that these metrics are highly correlated with RATE in our performance results.
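Concretely, the high-vs-others RATE@$u$ estimate reduces to a quantile cut over priority scores followed by two sample means. A minimal sketch on synthetic data (the data and all variable names are illustrative, not the paper's pipeline):

```python
import numpy as np

def rate_at_u(gamma_hat, scores, u):
    """High-vs-others RATE@u: mean pseudo-outcome of the samples whose
    priority score is at or above the u-quantile (e.g. u = 0.99 keeps
    roughly the top 1%), minus the mean pseudo-outcome (the ATE
    estimate) over all samples."""
    top = scores >= np.quantile(scores, u)
    return gamma_hat[top].mean() - gamma_hat.mean()

# Toy check: a priority score that perfectly ranks a heterogeneous
# effect yields a clearly positive RATE; an uninformative score does not.
rng = np.random.default_rng(0)
tau = rng.normal(0.0, 1.0, size=10_000)              # true per-sample CATE
gamma_hat = tau + rng.normal(0.0, 0.5, size=10_000)  # noisy unbiased estimate
good = rate_at_u(gamma_hat, scores=tau, u=0.99)                      # positive
poor = rate_at_u(gamma_hat, scores=rng.normal(size=10_000), u=0.99)  # near zero
```

With an accurate estimator, the top-ranked group's average pseudo-outcome exceeds the overall average, so `good` is large and positive while `poor` hovers near zero.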
Training & Evaluation: For each method, we ran a hyperparameter search with 48 random configurations (48 from running 8 jobs in parallel on each of 6 servers) that were drawn uniformly from a pre-defined hyperparameter search space (see Appendix C.4). Methods that can be trained on multiple tasks and then applied to tasks unseen during training (i.e., CaML and its ablations, S-learner w/ meta-learning, T-learner w/ meta-learning, SIN, GraphITE) were trained for 24 hours (per run) on the meta-training tasks. Model selection was performed on the meta-validation tasks by maximizing the mean RATE@0.998 across meta-validation tasks. Then, the best hyperparameter configuration was used to fit 8 repetition runs across 8 different random seeds. Each repetition model was then tested on the meta-testing tasks, and averages across the testing tasks are reported for all metrics. To make the setting of multi-task models comparable with single-task models that were trained on meta-testing tasks (requiring a train and test split of each meta-testing task), the evaluation of all models was computed on the test split of the meta-testing tasks. Single-task baselines (FlexTENet, TARNet, DragonNet, RA-learner, R-learner, X-learner, and T-learner) were given access to the meta-testing tasks during training. Specifically, model selection was performed on the meta-validation tasks, while the best hyperparameter configuration was used to train 8 repetition models (using 8 random seeds) on the train split of each meta-testing task. For the final evaluation, each single-task model that was fit on meta-testing task $i$ was tested on the test split of the same meta-testing task $i$ , and the average metrics were reported across meta-testing tasks.

# C.1.2 LINCS

Interventions (W): Interventions in the LINCS dataset consist of a single perturbagen (small molecule).
For intervention information, we used the molecular embeddings for each perturbagen from the RDKit featurizer. The same cell line-perturbagen combinations are tested with different perturbagen dosages and times of exposure [46]. To maintain consistency in experimental conditions while also ensuring that the dataset is sufficiently large for training a model, we filter for the most frequently occurring dosage and time of exposure in the dataset, which are $10\mu M$ and 24 hours, respectively. We use data from 10,322 different perturbagens.

Control group. For each perturbagen (at a given timepoint and dose), we use cell lines which did not receive that intervention as the control group.

Outcomes $(Y)$ : We measure gene expression across the top-50 and top-20 landmark differentially expressed genes (DEGs) in the LINCS dataset. Accurately predicting gene expression for these DEGs is most crucial to the drug discovery process.

Features $(X)$ : We use 19,221 features from the Cancer Cell Line Encyclopedia (CCLE) [22] to describe each cell line, based on historical gene expression values in a different lab environment. Our dataset consisted of 99 unique cell lines (after filtering for cell lines with CCLE features).

Metrics: A key advantage of experiments on cells is that at evaluation time we can observe both $Y(0)$ and $Y(1)$ for the same cell line $X$ , through multiple experiments on clones of the same cell line in controlled lab conditions. In the LINCS dataset, $Y(0)$ is also measured for all cells which received an intervention. Thus, we can directly compute the Precision in Estimation of Heterogeneous Effects (PEHE) on all treated cells in our meta-testing dataset. PEHE is a standard metric for CATE estimation performance [30], analogous to mean squared error (MSE).
$$
P E H E = \frac {1}{N} \sum_ {i = 1} ^ {N} \left(\tau_ {i} - \hat {\tau} _ {i}\right) ^ {2} \tag {18}
$$

Training & Evaluation: For each method, we ran a hyperparameter search with 48 random configurations (48 from running 8 jobs in parallel on each of 6 servers) that were drawn uniformly from a pre-defined hyperparameter search space (see Appendix C.4). Methods that can be trained on multiple tasks and then applied to tasks unseen during training (i.e., CaML and its ablations, S-learner w/ meta-learning, T-learner w/ meta-learning, SIN) were trained for 12 hours (per run) on the meta-training tasks. Model selection was performed on the meta-validation tasks by minimizing the overall PEHE for the top-20 most differentially expressed genes (DEGs) across meta-validation tasks. Then, the best hyperparameter configuration was used to fit 8 repetition runs across 8 different random seeds. Each repetition model was then tested on the meta-testing tasks, and averages across the testing tasks are reported for all metrics.

Data augmentation: We augment each batch of data during training to also include treated samples with their pseudo-outcome labels set to 0 and their $W$ set to the zero vector.

# C.2 Selecting holdout interventions for meta-validation and meta-testing

# C.2.1 Claims.

In the 30.4 million patient insurance claims dataset, each intervention task in meta-train/meta-val/meta-testing corresponds to a natural experiment of multiple patients, with some interventions (e.g. commonly prescribed drugs) having millions of associated patients who were prescribed the drug. One challenge is that in this setting, there is overlap in subjects between the natural experiments sampled by CaML, which can lead to data leakage between training and testing. For instance, if a patient received Drug 1 (in meta-test) and Drug 2 (in meta-train), they would appear in both natural experiments.
We take a conservative approach and exclude all patients who have ever received a meta-testing drug in their lifespan from the natural experiments for meta-val/meta-train. Similarly, we exclude all patients who received a meta-validation drug from meta-training.

This approach means we must take great care in selecting meta-testing drugs. Specifically, we must trade off selecting drugs that are important (covering enough patients) against not diminishing the training dataset size. For instance, selecting a commonly prescribed drug (e.g. aspirin) for meta-testing would deplete our meta-training dataset by over $50\%$ of patients. Thus we only selected meta-test/meta-validation drugs which were prescribed to between 100K and 1 million patients in our dataset, after filtering for only drugs which are known to cause pancytopenia [42] (using the SIDER database). From this subset of drugs, we randomly selected 10 meta-testing drugs and 2 meta-validation drugs, resulting in a total meta-testing/meta-validation pool of 4.1 million patients and 685K patients, respectively.

To evaluate on unseen pairs of drugs on the same hold-out test dataset, we additionally created a second pairs testing dataset from the 5 most frequently occurring combinations in the meta-testing dataset. This allowed us to train a single model on the same meta-train split and evaluate on both single-drug and drug-pair interventions without data leakage. Designing a larger evaluation of pairs was not possible because, while pairs of drugs are commonly prescribed as interventions, each particular pair of drugs is a rare event, and accurately evaluating CATE estimation performance (for a rare outcome such as pancytopenia) requires amassing a natural experiment with at least several thousand patients who received the same intervention.

# C.2.2 LINCS.
The goal in selecting holdout interventions for the meta-validation and meta-testing sets was to ensure that they consisted of both cell lines and tasks (small molecules) that had not been seen previously at the time of training (i.e. zero-shot on cell lines and tasks).

Using a random data splitting approach would result in large portions (up to $50\%$ ) of the data being unused to comply with the zero-shot requirements on cell lines and tasks. One approach to tackle this was to reserve for the held-out sets only those tasks which had been tested on the fewest cell lines. This preserved the maximum amount of data but resulted in an average of just 1 cell line per task in the meta-testing and meta-validation sets, which would not be fair to the non-zero-shot baselines.

To address these issues, we designed a new data split procedure that exploits the structure of how tasks and cell lines are paired. To do so, we first clustered tasks by the cell lines they are tested on. We then identified a set of 600 drugs that had all been tested on a shared set of roughly 20 cell lines. We divided the cell lines and tasks within this set into the meta-validation and meta-testing sets, while enforcing zero-shot constraints on both. This resulted in roughly 10 cell lines per intervention in both the meta-validation and meta-testing sets, while still maintaining a reasonably large size of 11 distinct cell lines and 300 distinct tasks in both sets. All remaining tasks and cell lines were reserved for the training set (see Table 8).

# C.3 Understanding CaML's performance

Our comparison to CATE estimators which are restricted to single interventions (grey, Tables 2 and 5) shows that a key reason for CaML's strong performance is its ability to jointly learn from many intervention datasets in order to generalize to unseen interventions.
Additionally, in both the Claims and LINCS settings, we conduct two key ablation studies to further understand the underlying reasons for CaML's strong performance.

In our first ablation experiment (w/o meta-learning), we trained the CaML model without meta-learning, instead using standard empirical risk minimization (ERM) [78]. This can be seen as a specific instance of the CaML algorithm (Algorithm 1) with $k = 1$ [54]. This experiment showed a varying degree of performance deterioration across our primary tests. In the Claims setting, we observed a decrease in the RATE performance metric of $15\% -22\%$ (Table 2), while in the LINCS setting, PEHE worsened (increased) by approximately 0.01 (Table 3). These results indicate that the absence of meta-learning hurts the model's performance, although the impact varies by setting. An important detail is that the Claims experiments dealt with substantially larger datasets, each comprising hundreds of thousands of patients per intervention. This scale of data potentially amplifies the benefits of meta-learning on the Claims dataset: the larger dataset enables the model to adapt to a given task over a larger set of iterations without reusing the same data, thereby enhancing the efficacy of meta-learning.

Our second ablation (w/o RA-learner) assesses the sensitivity of CaML's performance to different pseudo-outcome estimation strategies. A key aspect of CaML is its flexibility in the choice of pseudo-outcome estimator used to infer CATE, in contrast to prior work which uses specific CATE estimation strategies [25, 35]. We find that CaML's performance benefits strongly from this flexibility. We assess this by using an alternative pseudo-outcome estimator. Firstly, we find that this ablation results in much noisier model training.
For instance, the standard deviation in RATE across the 8 random seeds increases by $20\times$ when using the alternative pseudo-outcome estimator in the Claims setting. Moreover, the alternative pseudo-outcome estimator typically worsens performance, decreasing RATE by up to $6\%$ in the Claims setting and increasing PEHE by $20\% -21\%$ in the LINCS setting (Table 3). We note that this ablation performs slightly better at the 0.99 threshold, which may be a result of its high variance. The specific choice of alternative pseudo-outcome estimator varies by setting: we use the R-learner [55] for Claims, as it also achieves strong single-task performance on Claims data (Table 2, grey). However, the R-learner is restricted to single-dimensional outcomes, and thus for LINCS (in which outcomes are 50- and 20-dimensional) we use the PW-learner [16] instead.

# C.4 Hyperparameter space

# C.4.1 Claims dataset hyperparameter space

We list the hyperparameter search spaces for the medical claims dataset in the following tables. Table 9 shows the search space for CaML. The SIN baseline consists of two stages, Stage 1 and Stage 2. For the Stage 1 model, we searched the same hyperparameter space as for CaML (Table 9). For Stage 2, we used the hyperparameters displayed in Table 10. The search space for the GraphITE baseline is displayed in Table 11. For the S-learner and T-learner w/ meta-learning baselines, we use the same hyperparameter space as for CaML (Table 9), with the only major difference that these baselines predict the outcome $Y$ instead of $\hat{\tau}$ . For all deep learning-based methods, we employed a batch size of 8,192, except for GraphITE, where we were restricted to a batch size of 512 due to larger memory requirements. The search spaces for the single-task neural network baselines (FlexTENet, TARNet, and DragonNet) are shown in Tables 12, 13, and 14, respectively.
For the remaining baselines, i.e., the model-agnostic CATE estimators, the (shared) hyperparameter search space is shown in Table 15. Finally, we applied L1 regularization to the encoder layer of the customizable neural network models (those not reused from external packages), i.e., SIN, GraphITE, T-learner w/ meta-learning, S-learner w/ meta-learning, and CaML.

# C.4.2 LINCS hyperparameter space

We list the hyperparameter search spaces for LINCS in the following tables. The search space for CaML is shown in Table 16. SIN Stage 1 used the same search space as CaML (Table 16). The search space of SIN Stage 2 is shown in Table 17. The S-learner and T-learner w/ meta-learning used the same search space as CaML. The search space of GraphITE is shown in Table 18. All methods applied to LINCS used a batch size of 20.

# C.5 More details on intervention information

Here we give more details about the intervention information used for the medical claims dataset. In order to perform zero-shot generalization, we acquired information about a specific intervention through the use of pretrained embeddings. We generated these embeddings on the Precision Medicine Knowledge Graph [8], which contains drug nodes as well as 9 other node types. We extracted embeddings for 7,957 drugs from the knowledge graph.

To extract rich neighborhood information from the knowledge graph we used StarGraph [49], a coarse-to-fine representation learning algorithm. StarGraph generates a subgraph for each node by sampling from its neighbor nodes (all nodes in the one-hop neighborhood) and anchor nodes (a preselected subset of nodes appearing in the multi-hop neighborhood). In our case the anchor nodes were the $2\%$ of graph nodes with the highest degree. For the scoring function we used the augmented version of TripleRE [90] presented in the StarGraph article [49].
We performed a hyperparameter optimization to compare different models and determine the one used to calculate our final embeddings (see Table C.5). The hyperparameter search was random, with the objective of minimizing the loss function used in training on held-out data. The search range for each of the parameters is displayed in Table C.5. Since certain parameters did not seem to influence the final score much, we fixed them as constants and focused on optimizing the hyperparameters in the table. Accordingly, the number of sampled anchors was set to 20 and $u = 0.1$ in the augmented TripleRE function, matching the values used in StarGraph [48].

Our final embeddings were 256-dimensional; the learning rate was 2e-4 and the drop ratio was 5e-3. We used the self-adversarial negative sampling loss with $\gamma = 8$ and we sampled 4 neighbor nodes for each subgraph.

To additionally evaluate the quality of the embeddings, we assigned classes to drug combinations and then scored them using multiple clustering metrics. We were interested in whether embeddings of drug combinations used for similar purposes would be embedded closer together than other drug combinations. For the class label of single drugs we used the first level of the Anatomical Therapeutic Chemical (ATC) code, which represents one of the 14 anatomical or pharmacological groups. Since certain medications have more than one ATC code, we took the mode of all labels for a specific drug. For multiple drugs we combined all distinct first-level values and took their mode as the label. We used the Silhouette score, Calinski-Harabasz index, and Davies-Bouldin index, as well as the average classification accuracy over 10 runs of training a random forest classifier on a random sample of $80\%$ of the dataset and evaluating on the remaining $20\%$ .
Out of all tested embeddings, the hyperparameter-optimized StarGraph embeddings performed best (exceeding $93\%$ classification accuracy).

# C.6 Pseudo-outcome estimation

In our experiments, we estimate pseudo-outcomes $\tilde{\tau}$ for a given intervention $w$ using the RA-learner [16]:

$$
\tilde {\tau} = W \left(Y - \hat {\mu} _ {0} (X)\right) + (1 - W) \left(\hat {\mu} _ {1} (X) - Y\right) \tag {19}
$$

where $\hat{\mu}_w$ is an estimate of $\mu_w(x) = \mathbb{E}_{\mathcal{P}}\left[Y \mid X = x, W = w\right]$ .

Furthermore, in both settings we only estimate CATE for treated individuals. We focus on treated individuals in the Claims setting because we care about the risk of an adverse event when prescribing sick patients drugs that may cure their sickness, not the adverse-event risk of prescribing drugs to healthy patients (which is of less clinical interest). In the LINCS setting, we focus on treated cells because for these cell lines $Y(0)$ is also measured from a cloned cell line under similar laboratory conditions, which allows us to directly estimate CATE prediction performance using the PEHE metric. As we focus on treated samples, the RA-learner simplifies to $\tilde{\tau} = Y - \hat{\mu}_0(X)$ . We estimate $\hat{\mu}_0(X)$ using a random forest model in the Claims setting, whereas in the LINCS setting we use the point estimate from the untreated control cell line's gene expression.

# C.7 Baselines

Here we provide more details on the baselines used in our experiments.

Trained on test task: These baselines leverage CATE estimators which can only be trained on a single task (typically these are the strongest baselines when there is a large enough dataset for a single task). Thus, we train a single model for each meta-testing task on its train split, and evaluate performance on its test split.
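The simplified treated-sample pseudo-outcome computation from Appendix C.6 above ($\tilde{\tau} = Y - \hat{\mu}_0(X)$) can be sketched as follows. The synthetic natural experiment and variable names are illustrative only, and a linear least-squares fit stands in for the random forest used for $\hat{\mu}_0$ in the Claims setting:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic natural experiment: features X, binary treatment W, outcome Y.
n, d = 2000, 5
X = rng.normal(size=(n, d))
W = rng.integers(0, 2, size=n)
tau = 0.5 * X[:, 0]                            # heterogeneous true effect
Y = X[:, 1] + W * tau + 0.1 * rng.normal(size=n)

# Fit mu_0(x) = E[Y | X = x, W = 0] on control samples only (least
# squares here; the paper uses a random forest for the Claims data).
Xc = np.column_stack([X[W == 0], np.ones((W == 0).sum())])
beta, *_ = np.linalg.lstsq(Xc, Y[W == 0], rcond=None)

def mu0_hat(x):
    return np.column_stack([x, np.ones(len(x))]) @ beta

# Simplified RA-learner pseudo-outcome, Eq. (19) restricted to W = 1.
treated = W == 1
tau_tilde = Y[treated] - mu0_hat(X[treated])
```

These `tau_tilde` values then serve as noisy regression targets for meta-training; in the LINCS setting, $\hat{\mu}_0$ would instead be the matched control cell line's measured expression.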
We use a number of strong baselines for CATE estimation developed by prior work, including both model-agnostic and end-to-end deep learning approaches. Specifically, we use the model-agnostic CATE estimators T-learner [44], X-learner [44], RA-learner [16], and R-learner [55]. We additionally use the end-to-end deep learning estimators DragonNet [72], TARNet [70], and FlexTENet [17], using implementations from [17]. For the model-agnostic CATE estimators, we use random forest models following prior work [14, 81].

Zero-shot. These baselines use CATE estimators which incorporate intervention information $(W)$ and are capable of multi-task learning. We train these baselines on all meta-training tasks; they have no access to the meta-testing tasks during training. We found in preliminary experiments that, in some cases, baseline models trained with vanilla ERM would not even converge. For a fair comparison, we therefore allow all zero-shot baselines to be trained using Reptile (training with the same optimization strategy as Algorithm 1, while still permitting training with ERM by including $k = 1$ in the hyperparameter search space).

Firstly, we use GraphITE [25] and Structured Intervention Networks [35]. These are, to the best of our knowledge, the only methods from prior work which are (in principle) capable of zero-shot generalization. We use existing implementations provided by the authors [35].

Additionally, we implement two strong baselines which estimate CATE by modeling $Y(w)$ and $Y(0)$, rather than via pseudo-outcomes. These are variants of the S-learner and T-learner [44] with meta-learning, which use the intervention information as input rather than one-hot encoded vectors of the different interventions, such that they also have zero-shot capability.
Specifically, we train MLPs using the same architecture as CaML to estimate the response function from observed outcomes:

$$
\mu (x, w) = \mathbb {E} _ {\mathcal {P}} \left[ Y \mid X = x, W = w \right] \tag {20}
$$

and estimate CATE by

$$
\hat {\tau} _ {w} (x) = \hat {\mu} (x, w) - \hat {\mu} (x, \mathbf {0}) \tag {21}
$$

where $w$ denotes the intervention information for the intervention of interest, and $\mathbf{0}$ denotes a null intervention vector. In the LINCS setting, we represent $\mathbf{0}$ as a vector of zeros, whereas in the Claims setting we represent $\mathbf{0}$ as the mean embedding of all drugs (as the estimand is the increase in adverse event likelihood compared to a randomly chosen drug). The difference between the T-learner and the S-learner is that the T-learner estimates two models, one for control units and one for treated units. By contrast, the S-learner estimates a shared model across all units.

# D Additional Experiments

In general, limited training examples, or biases in the training data, will degrade model performance, and the CaML algorithm is no exception in this regard. For instance, if there are too few examples of prior interventions (e.g., only a handful of training drugs), then zero-shot generalization may become more challenging. It is therefore important to study the robustness of CaML's performance to limitations in the training dataset. To this end, we conduct additional experiments in which we downsample the number of training interventions. As expected, we find that: (1) zero-shot capabilities improve as the set of unique training interventions increases in size, and (2) we can still achieve strong performance on smaller datasets (e.g., runs with $60\%$ and $80\%$ of the interventions achieve similar performance).
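At prediction time, the zero-shot S-learner of Equations (20)-(21) reduces to differencing the fitted response surface at $w$ and at the null intervention $\mathbf{0}$. A minimal sketch, in which `mu_hat` is an invented toy stand-in for the fitted MLP (not the actual CaML architecture):

```python
def cate_s_learner(mu_hat, x, w, w_null):
    """Eq. (21): CATE estimate as the difference between the predicted
    response under intervention w and under the null intervention."""
    return mu_hat(x, w) - mu_hat(x, w_null)

# Hypothetical fitted response surface (invented for illustration):
# outcome rises with the interaction of individual features x and
# intervention attributes w.
def mu_hat(x, w):
    return 0.5 * x[0] + sum(xi * wi for xi, wi in zip(x, w))

x = [1.0, 2.0]
w = [0.3, 0.1]        # attributes of a (possibly unseen) intervention
w_null = [0.0, 0.0]   # null intervention: zero vector (LINCS convention)
print(cate_s_learner(mu_hat, x, w, w_null))  # ~= 0.5
```

The T-learner variant differs only in fitting two such response surfaces (one on control units, one on treated units) instead of one shared model.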
![](images/86c2ccfa6a1f1743729dce0a429053fc04727406b705af266a0eae389905d955.jpg)
Effect of Decreasing Training Intervention Set Size (Claims - RATE @ 0.998)

![](images/f13c1ab06addfa6e4298d5c41a2e33b74d18a30fa0218e6abd11733a8f6aa732.jpg)
Effect of Decreasing Training Intervention Set Size (LINCS - PEHE 20 DEGs)

Figure 3: Measuring the robustness of CaML to limitations in the training intervention data. We downsample the number of training interventions and measure CaML's performance. Overall, we find that CaML's zero-shot capabilities improve as the set of unique training interventions increases in size. Nevertheless, CaML still achieves strong performance on smaller datasets (e.g., runs with $60\%$ and $80\%$ of the interventions achieve similar performance). Results are analogous for other metrics on both datasets. Top: performance on the Claims dataset at predicting the effect of a novel drug on the likelihood of pancytopenia onset (RATE @ 0.998). Bottom: performance on the LINCS dataset at predicting the gene expression of the Top 20 and Top 50 most differentially expressed genes (DEGs).

# E Unbiasedness of CATE estimates

# Unbiasedness of Pseudo-outcome labels

We show for an example pseudo-outcome label, the RA-learner [16], that the estimated pseudo-outcome labels are unbiased estimates of the true CATE, i.e.:

$$
\mathbb {E} \left[ \tilde {\tau} | X = x \right] = \tau (x) = \mathbb {E} \left[ Y (1) - Y (0) | X = x \right]
$$

The pseudo-outcome $\tilde{\tau}$ for the RA-learner is defined as $\tilde{\tau}_i = Y_i - \hat{\mu}_0(X_i)$ for treated units $(W_{i} = 1)$ , and $\tilde{\tau}_i = \hat{\mu}_1(X_i) - Y_i$ for control units $(W_{i} = 0)$ .

Here, $\hat{\mu}_0(X), \hat{\mu}_1(X)$ denote unbiased and correctly specified nuisance models for the outcomes $Y(0)$ and $Y(1)$ , respectively.
In other words, $\mathbb{E}[\hat{\mu}_0(x)] = \mu_0(x) = \mathbb{E}[Y(0)|X = x]$ and $\mathbb{E}[\hat{\mu}_1(x)] = \mu_1(x) = \mathbb{E}[Y(1)|X = x]$ .

We consider the treated and control units separately. For treated units $(W_{i} = 1)$ , we have:

$$
\tilde {\tau} _ {i} = Y _ {i} - \hat {\mu} _ {0} (X _ {i}).
$$

Hence, their expectation, conditioned on covariates $X$ , can be written as:

$$
\mathbb {E} \left[ \tilde {\tau} | X = x \right] = \mathbb {E} \left[ Y - \hat {\mu} _ {0} (X) | X = x \right] = \mathbb {E} \left[ Y | X = x \right] - \mathbb {E} \left[ \hat {\mu} _ {0} (X) | X = x \right] = \mathbb {E} \left[ Y | X = x \right] - \mathbb {E} \left[ Y (0) | X = x \right],
$$

which by applying the consistency assumption (paper Line 98) for treated units is equivalent to:

$$
\mathbb {E} \left[ Y (1) | X = x \right] - \mathbb {E} \left[ Y (0) | X = x \right] = \mathbb {E} \left[ Y (1) - Y (0) | X = x \right] = \tau (x).
$$

Analogously, we can make the same argument for control units $(W = 0)$ . Here, the pseudo-outcome is computed as:

$$
\tilde {\tau} _ {i} = \hat {\mu} _ {1} (X _ {i}) - Y _ {i}.
$$

Hence, we have

$$
\mathbb {E} \left[ \tilde {\tau} | X = x \right] = \mathbb {E} \left[ \hat {\mu} _ {1} (X) - Y | X = x \right] = \mathbb {E} \left[ \hat {\mu} _ {1} (X) | X = x \right] - \mathbb {E} \left[ Y | X = x \right] = \mathbb {E} \left[ Y (1) | X = x \right] - \mathbb {E} \left[ Y | X = x \right],
$$

which under consistency (for control units) is equivalent to:

$$
\mathbb {E} \left[ Y (1) | X = x \right] - \mathbb {E} \left[ Y (0) | X = x \right] = \mathbb {E} \left[ Y (1) - Y (0) | X = x \right] = \tau (x).
$$

# Unbiasedness of Model Loss

We consider parametrized CATE estimators $\Psi_{\theta}\colon \mathbb{R}^{e}\times \mathbb{R}^{d}\to \mathbb{R}$ that take as input intervention information $w\in \mathbb{R}^e$ (e.g., a drug's attributes) and individual features $x\in \mathbb{R}^d$ (e.g., patient medical history) and return a scalar estimate of the CATE (e.g., the effect of the drug on patient health).

We denote the loss function $L$ with regard to a CATE estimator $\Psi$ and a target $y$ as:

$$
L (\Psi , y) = \left(\Psi (w, x) - y\right) ^ {2}
$$

As in Theorem 1, we assume that the pseudo-outcome targets $\tilde{\tau}$ during training satisfy $\tilde{\tau} = \tau + \xi$ , where $\tau$ is the true (unobserved) CATE and $\xi$ is independent zero-mean noise.

Lemma 10. Given two different CATE estimators $\hat{\Psi}_{\theta_1}, \hat{\Psi}_{\theta_2}$ , parameterized by $\theta_1$ and $\theta_2$ :

$$
\mathbb {E} \left[ L (\hat {\Psi} _ {\theta_ {1}}, \tilde {\tau}) \right] \leq \mathbb {E} \left[ L (\hat {\Psi} _ {\theta_ {2}}, \tilde {\tau}) \right] \Rightarrow \mathbb {E} \left[ L (\hat {\Psi} _ {\theta_ {1}}, \tau) \right] \leq \mathbb {E} \left[ L (\hat {\Psi} _ {\theta_ {2}}, \tau) \right]
$$

Proof. We follow a similar argument to Tripuraneni et al. [77].
+ +$$ +\begin{array}{l} \mathbb {E} \left[ L (\hat {\Psi} _ {\theta}, \tilde {\tau}) \right] = \mathbb {E} \left[ \left(\hat {\Psi} _ {\theta} (w, x) - \tilde {\tau}\right) ^ {2} \right] = \mathbb {E} \left[ \left(\hat {\Psi} _ {\theta} (w, x) - \tau + \tau - \tilde {\tau}\right) ^ {2} \right] \\ = \mathbb {E} \left[ \left(\hat {\Psi} _ {\theta} (w, x) - \tau\right) ^ {2} + (\tau - \tilde {\tau}) ^ {2} + 2 \left(\hat {\Psi} _ {\theta} (w, x) - \tau\right) (\tau - \tilde {\tau}) \right] \\ = \mathbb {E} \left[ \left(\hat {\Psi} _ {\theta} (w, x) - \tau\right) ^ {2} \right] + \mathbb {E} \left[ (\tau - \tilde {\tau}) ^ {2} \right] + 2 \mathbb {E} \left[ \left(\hat {\Psi} _ {\theta} (w, x) - \tau\right) (\tau - \tilde {\tau}) \right]. \\ \end{array} +$$ + +Now we subtract the loss for the two models parameterized by $\theta_{1}$ and $\theta_{2}$ : + +$$ +\begin{array}{l} \mathbb {E} \left[ L \left(\hat {\Psi} _ {\theta_ {1}}, \tilde {\tau}\right) \right] - \mathbb {E} \left[ L \left(\hat {\Psi} _ {\theta_ {2}}, \tilde {\tau}\right) \right] = \tag {1} \\ \mathbb {E} \left[ \left(\hat {\Psi} _ {\theta_ {1}} (w, x) - \tau\right) ^ {2} \right] + \mathbb {E} \left[ (\tau - \tilde {\tau}) ^ {2} \right] + 2 \mathbb {E} \left[ \left(\hat {\Psi} _ {\theta_ {1}} (w, x) - \tau\right) (\tau - \tilde {\tau}) \right] - \\ \left(\mathbb {E} \left[ \left(\hat {\Psi} _ {\theta_ {2}} (w, x) - \tau\right) ^ {2} \right] + \mathbb {E} \left[ (\tau - \tilde {\tau}) ^ {2} \right] + 2 \mathbb {E} \left[ \left(\hat {\Psi} _ {\theta_ {2}} (w, x) - \tau\right) (\tau - \tilde {\tau}) \right]\right) = \\ \mathbb {E} \left[ \left(\hat {\Psi} _ {\theta_ {1}} (w, x) - \tau\right) ^ {2} \right] + 2 \mathbb {E} \left[ \left(\hat {\Psi} _ {\theta_ {1}} (w, x) - \tau\right) (\tau - \tilde {\tau}) \right] - \\ \mathbb {E} \left[ \left(\hat {\Psi} _ {\theta_ {2}} (w, x) - \tau\right) ^ {2} \right] - 2 \mathbb {E} \left[ \left(\hat {\Psi} _ {\theta_ {2}} (w, x) - \tau\right) (\tau - \tilde {\tau}) \right] 
= \\ \end{array}
$$

Expanding out the right-hand terms gives us:

$$
\mathbb {E} \left[ \left(\hat {\Psi} _ {\theta_ {1}} (w, x) - \tau\right) ^ {2} \right] + 2 \mathbb {E} \left[ \hat {\Psi} _ {\theta_ {1}} (w, x) \cdot \tau - \hat {\Psi} _ {\theta_ {1}} (w, x) \cdot \tilde {\tau} - \tau^ {2} + \tau \cdot \tilde {\tau} \right] -
$$

$$
\mathbb {E} \left[ \left(\hat {\Psi} _ {\theta_ {2}} (w, x) - \tau\right) ^ {2} \right] - 2 \mathbb {E} \left[ \hat {\Psi} _ {\theta_ {2}} (w, x) \cdot \tau - \hat {\Psi} _ {\theta_ {2}} (w, x) \cdot \tilde {\tau} - \tau^ {2} + \tau \cdot \tilde {\tau} \right] =
$$

$$
\mathbb {E} \left[ \left(\hat {\Psi} _ {\theta_ {1}} (w, x) - \tau\right) ^ {2} \right] + 2 \mathbb {E} \left[ \hat {\Psi} _ {\theta_ {1}} (w, x) \cdot \tau - \hat {\Psi} _ {\theta_ {1}} (w, x) \cdot \tilde {\tau} \right] - 2 \mathbb {E} \left[ \tau^ {2} \right] + 2 \mathbb {E} \left[ \tau \cdot \tilde {\tau} \right] -
$$

$$
\mathbb {E} \left[ \left(\hat {\Psi} _ {\theta_ {2}} (w, x) - \tau\right) ^ {2} \right] - 2 \mathbb {E} \left[ \hat {\Psi} _ {\theta_ {2}} (w, x) \cdot \tau - \hat {\Psi} _ {\theta_ {2}} (w, x) \cdot \tilde {\tau} \right] + 2 \mathbb {E} \left[ \tau^ {2} \right] - 2 \mathbb {E} \left[ \tau \cdot \tilde {\tau} \right],
$$

in which the $\mathbb{E}[\tau^2]$ and $\mathbb{E}[\tau \cdot \tilde{\tau}]$ terms cancel, leaving:

$$
\mathbb {E} \left[ \left(\hat {\Psi} _ {\theta_ {1}} (w, x) - \tau\right) ^ {2} \right] + 2 \mathbb {E} \left[ \hat {\Psi} _ {\theta_ {1}} (w, x) \cdot \tau - \hat {\Psi} _ {\theta_ {1}} (w, x) \cdot \tilde {\tau} \right] -
$$

$$
\mathbb {E} \left[ \left(\hat {\Psi} _ {\theta_ {2}} (w, x) - \tau\right) ^ {2} \right] - 2 \mathbb {E} \left[ \hat {\Psi} _ {\theta_ {2}} (w, x) \cdot \tau - \hat {\Psi} _ {\theta_ {2}} (w, x) \cdot \tilde {\tau} \right] =
$$

$$
\mathbb {E} \left[ \left(\hat {\Psi} _ {\theta_ {1}} (w, x) - \tau\right) ^ {2} \right] - \mathbb {E} \left[ \left(\hat {\Psi} _ {\theta_ {2}} (w, x) - \tau\right) ^ {2} \right] + 2 \mathbb {E} \left[ \left(\hat {\Psi} _ {\theta_ {1}} (w, x) - \hat {\Psi} _ {\theta_ {2}} (w, x)\right) \cdot \underbrace {(\tau - \tilde {\tau})} _ {- \xi} \right] =
$$

$$
\mathbb {E} \left[ \left(\hat {\Psi} _ {\theta_ {1}} (w, x) - \tau\right) ^ {2} \right] - \mathbb {E} \left[ \left(\hat {\Psi} _ {\theta_ {2}} (w, x) - \tau\right) ^ {2} \right] + 2 \mathbb {E} \left[ \hat {\Psi} _ {\theta_ {1}} (w, x) - \hat {\Psi} _ {\theta_ {2}} (w, x) \right] \underbrace {\mathbb {E} [ - \xi ]} _ {0} =
$$

$$
\mathbb {E} \left[ L (\hat {\Psi} _ {\theta_ {1}}, \tau) \right] - \mathbb {E} \left[ L (\hat {\Psi} _ {\theta_ {2}}, \tau) \right],
$$

where the second-to-last step uses the independence of $\xi$ to factor the expectation. From this equality with Equation 1, the claim follows.
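The unbiasedness argument above is easy to check numerically. The sketch below simulates a toy setting with a known CATE, forms RA-learner pseudo-outcomes as in Equation (19) using oracle nuisance models, and verifies that their conditional mean recovers $\tau(x)$ (all data-generating choices are invented for illustration):

```python
import random

random.seed(0)

def mu0(x):  # oracle control-outcome model (invented toy setting)
    return 2.0 * x

def mu1(x):  # oracle treated-outcome model; true CATE tau(x) = x + 1
    return 3.0 * x + 1.0

n = 200_000
x_val = 0.7
tau_true = mu1(x_val) - mu0(x_val)  # = 1.7

pseudo = []
for _ in range(n):
    w = random.random() < 0.5                      # random treatment assignment
    noise = random.gauss(0.0, 1.0)                 # zero-mean outcome noise
    y = (mu1(x_val) if w else mu0(x_val)) + noise  # observed outcome (consistency)
    # RA-learner pseudo-outcome, Eq. (19):
    t = (y - mu0(x_val)) if w else (mu1(x_val) - y)
    pseudo.append(t)

# E[tau_tilde | X = x] ~= tau(x): the Monte Carlo mean matches the true CATE.
print(abs(sum(pseudo) / n - tau_true) < 0.02)
```

Each pseudo-outcome equals $\tau \pm$ noise, so individual labels are noisy, but their average converges to the true CATE, which is exactly the $\tilde{\tau} = \tau + \xi$ structure assumed by Lemma 10.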
| Method | RATE @0.999 | RATE @0.998 | RATE @0.995 | RATE @0.99 | Recall @0.999 | Recall @0.998 | Recall @0.995 | Recall @0.99 | Precision @0.999 | Precision @0.998 | Precision @0.995 | Precision @0.99 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Random | 0.00±<0.001 | 0.00±<0.001 | 0.00±<0.001 | 0.00±<0.001 | 0.00±<0.001 | 0.00±<0.001 | 0.01±<0.001 | 0.00±<0.001 | 0.00±<0.001 | 0.00±<0.001 | 0.00±<0.001 | 0.00±<0.001 |
| T-learner | 0.32±<0.001 | 0.26±<0.001 | 0.16±<0.001 | 0.10±<0.001 | 0.12±<0.001 | 0.18±<0.001 | 0.26±<0.001 | 0.31±<0.001 | 0.36±<0.001 | 0.29±<0.001 | 0.18±<0.001 | 0.11±<0.001 |
| X-learner | 0.06±<0.001 | 0.05±<0.001 | 0.04±<0.001 | 0.03±<0.001 | 0.02±<0.001 | 0.04±<0.001 | 0.08±<0.001 | 0.12±<0.001 | 0.09±<0.001 | 0.07±<0.001 | 0.06±<0.001 | 0.05±<0.001 |
| R-learner | 0.19±<0.001 | 0.17±<0.001 | 0.12±<0.001 | 0.08±<0.001 | 0.06±<0.001 | 0.10±<0.001 | 0.19±<0.001 | 0.26±<0.001 | 0.24±<0.001 | 0.21±<0.001 | 0.15±<0.001 | 0.11±<0.001 |
| RA-learner | 0.47±0.001 | 0.37±<0.001 | 0.23±<0.001 | 0.14±<0.001 | 0.17±<0.001 | 0.26±<0.001 | 0.38±<0.001 | 0.45±<0.001 | 0.54±0.001 | 0.42±<0.001 | 0.26±<0.001 | 0.16±<0.001 |
| DragonNet | 0.09±0.037 | 0.07±0.030 | 0.05±0.019 | 0.04±0.013 | 0.02±0.008 | 0.04±0.012 | 0.07±0.020 | 0.10±0.027 | 0.12±0.045 | 0.10±0.036 | 0.07±0.023 | 0.05±0.015 |
| TARNet | 0.15±0.011 | 0.12±0.011 | 0.07±0.006 | 0.05±0.004 | 0.05±0.003 | 0.08±0.006 | 0.12±0.008 | 0.14±0.011 | 0.18±0.013 | 0.15±0.012 | 0.09±0.007 | 0.06±0.004 |
| FlexTENet | 0.10±0.015 | 0.09±0.016 | 0.06±0.008 | 0.04±0.006 | 0.04±0.006 | 0.07±0.009 | 0.12±0.011 | 0.17±0.017 | 0.12±0.018 | 0.11±0.019 | 0.08±0.010 | 0.06±0.007 |
| GraphITE | 0.19±0.024 | 0.12±0.013 | 0.05±0.004 | 0.03±0.002 | 0.07±0.009 | 0.08±0.010 | 0.09±0.008 | 0.10±0.008 | 0.23±0.027 | 0.14±0.015 | 0.07±0.005 | 0.04±0.003 |
| SIN | 0.00±0.002 | 0.00±0.001 | 0.00±0.001 | 0.00±0.001 | 0.00±0.001 | 0.00±0.001 | 0.01±0.001 | 0.02±0.002 | 0.01±0.002 | 0.01±0.001 | 0.01±0.001 | 0.01±0.001 |
| S-learner w/ meta-learning | 0.21±0.032 | 0.16±0.028 | 0.09±0.020 | 0.05±0.012 | 0.08±0.013 | 0.11±0.022 | 0.15±0.035 | 0.16±0.038 | 0.25±0.034 | 0.18±0.031 | 0.10±0.023 | 0.06±0.014 |
| T-learner w/ meta-learning | 0.40±0.012 | 0.31±0.010 | 0.18±0.007 | 0.11±0.004 | 0.15±0.006 | 0.22±0.008 | 0.32±0.013 | 0.38±0.014 | 0.45±0.013 | 0.35±0.011 | 0.21±0.008 | 0.13±0.004 |
| CaML - w/o meta-learning | 0.39±0.012 | 0.31±0.006 | 0.18±0.008 | 0.11±0.006 | 0.15±0.005 | 0.22±0.007 | 0.32±0.014 | 0.39±0.021 | 0.45±0.010 | 0.35±0.006 | 0.22±0.008 | 0.14±0.006 |
| CaML - w/o RA-learner | 0.45±0.058 | 0.36±0.066 | 0.22±0.067 | 0.14±0.041 | 0.16±0.020 | 0.24±0.019 | 0.35±0.016 | 0.41±0.023 | 0.51±0.076 | 0.41±0.082 | 0.26±0.078 | 0.16±0.048 |
| CaML (ours) | 0.48±0.010 | 0.38±0.007 | 0.23±0.003 | 0.13±0.002 | 0.18±0.004 | 0.27±0.005 | 0.38±0.006 | 0.45±0.010 | 0.54±0.012 | 0.43±0.008 | 0.26±0.004 | 0.16±0.003 |

Table 4: Performance results for the Claims dataset (predicting pancytopenia onset from drug exposure using patient medical history). All metrics are higher-is-better (↑). This table extends Table 2 with standard deviations.
| Method | RATE @0.999 | RATE @0.998 | RATE @0.995 | RATE @0.99 | Recall @0.999 | Recall @0.998 | Recall @0.995 | Recall @0.99 | Precision @0.999 | Precision @0.998 | Precision @0.995 | Precision @0.99 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Random | 0.00±<0.001 | 0.00±<0.001 | 0.00±<0.001 | 0.00±<0.001 | 0.00±<0.001 | 0.00±<0.001 | 0.01±<0.001 | 0.01±<0.001 | 0.01±<0.001 | 0.01±<0.001 | 0.01±<0.001 | 0.00±<0.001 |
| T-learner | 0.10±<0.001 | 0.07±<0.001 | 0.05±<0.001 | 0.04±<0.001 | 0.05±<0.001 | 0.07±<0.001 | 0.11±<0.001 | 0.13±<0.001 | 0.10±<0.001 | 0.08±<0.001 | 0.06±<0.001 | 0.04±<0.001 |
| X-learner | 0.00±<0.001 | -0.01±<0.001 | 0.00±<0.001 | 0.00±<0.001 | 0.00±<0.001 | 0.00±<0.001 | 0.01±<0.001 | 0.02±<0.001 | 0.00±<0.001 | 0.00±<0.001 | 0.00±<0.001 | 0.01±<0.001 |
| R-learner | -0.01±<0.001 | -0.01±<0.001 | -0.01±<0.001 | 0.00±<0.001 | 0.00±<0.001 | 0.00±<0.001 | 0.00±<0.001 | 0.04±<0.001 | 0.00±<0.001 | 0.00±<0.001 | 0.00±<0.001 | 0.01±<0.001 |
| RA-learner | 0.28±<0.001 | 0.26±<0.001 | 0.17±<0.001 | 0.10±<0.001 | 0.10±<0.001 | 0.19±<0.001 | 0.30±<0.001 | 0.37±<0.001 | 0.30±<0.001 | 0.28±<0.001 | 0.18±<0.001 | 0.11±<0.001 |
| DragonNet | -0.01±0.002 | 0.00±0.009 | 0.00±0.004 | 0.00±0.003 | 0.00±<0.001 | 0.00±0.003 | 0.00±0.005 | 0.01±0.009 | 0.00±<0.001 | 0.00±0.010 | 0.00±0.004 | 0.00±0.003 |
| TARNet | 0.04±0.046 | 0.03±0.030 | 0.02±0.013 | 0.02±0.012 | 0.01±0.011 | 0.02±0.015 | 0.04±0.013 | 0.06±0.029 | 0.05±0.046 | 0.04±0.032 | 0.03±0.013 | 0.02±0.012 |
| FlexTENet | 0.02±0.024 | 0.02±0.019 | 0.04±0.012 | 0.03±0.013 | 0.01±0.009 | 0.03±0.018 | 0.08±0.012 | 0.12±0.037 | 0.02±0.027 | 0.03±0.020 | 0.04±0.012 | 0.04±0.014 |
| S-learner w/ meta-learning | 0.27±0.173 | 0.16±0.118 | 0.08±0.052 | 0.04±0.030 | 0.09±0.055 | 0.10±0.070 | 0.13±0.084 | 0.15±0.090 | 0.29±0.180 | 0.18±0.123 | 0.09±0.055 | 0.05±0.032 |
| T-learner w/ meta-learning | 0.27±0.173 | 0.16±0.118 | 0.08±0.052 | 0.04±0.030 | 0.09±0.055 | 0.10±0.070 | 0.13±0.084 | 0.15±0.090 | 0.29±0.180 | 0.18±0.123 | 0.09±0.055 | 0.05±0.032 |
| GraphITE | 0.25±0.088 | 0.15±0.054 | 0.06±0.025 | 0.03±0.011 | 0.08±0.024 | 0.10±0.034 | 0.11±0.045 | 0.13±0.049 | 0.27±0.091 | 0.16±0.057 | 0.07±0.027 | 0.04±0.013 |
| SIN | 0.00±0.008 | 0.00±0.014 | 0.00±0.008 | 0.00±0.005 | 0.00±0.005 | 0.00±0.008 | 0.02±0.015 | 0.03±0.009 | 0.00±0.007 | 0.01±0.014 | 0.01±0.009 | 0.01±0.005 |
| CaML - w/o meta-learning | 0.45±0.070 | 0.38±0.057 | 0.21±0.017 | 0.13±0.008 | 0.19±0.019 | 0.28±0.026 | 0.38±0.025 | 0.45±0.019 | 0.49±0.070 | 0.41±0.057 | 0.23±0.017 | 0.15±0.008 |
| CaML - w/o RA-learner | 0.40±0.101 | 0.33±0.034 | 0.24±0.014 | 0.15±0.010 | 0.18±0.025 | 0.28±0.010 | 0.42±0.024 | 0.50±0.028 | 0.44±0.099 | 0.36±0.033 | 0.26±0.014 | 0.17±0.010 |
| CaML (ours) | 0.47±0.084 | 0.37±0.044 | 0.23±0.022 | 0.15±0.013 | 0.20±0.015 | 0.30±0.016 | 0.43±0.024 | 0.51±0.027 | 0.51±0.079 | 0.40±0.044 | 0.25±0.023 | 0.17±0.013 |

Table 5: Performance results for the medical claims dataset, in which the task is to predict the effect of a pair of drugs on pancytopenia occurrence. All metrics are higher-is-better (↑). Mean and standard deviation between runs are reported. Single-task methods were trained on the meta-testing tasks (best model underlined). Methods capable of training across multiple tasks were trained on meta-training tasks and applied to previously unseen meta-testing tasks (best model in bold). CaML outperforms the strongest baseline that had access to the testing tasks on 12 out of 12 metrics, and outperforms all zero-shot baselines. Notably, due to the small sample size for natural experiments with combinations of drugs, the RATE estimation process is very noisy, which is reflected in the high variability of the measured RATE. The secondary metrics (Recall and Precision), which are not affected by this, additionally assert the dominance of CaML over all baselines.
| Drug | Split | # of Patients |
|---|---|---|
| Allopurinol | Test | 815,921 |
| Pregabalin | Test | 636,995 |
| Mirtazapine | Test | 623,980 |
| Indomethacin | Test | 560,380 |
| Colchicine | Test | 370,397 |
| Hydralazine | Test | 363,070 |
| Hydroxychloroquine | Test | 324,750 |
| Methotrexate | Test | 323,387 |
| Memantine | Test | 306,832 |
| Fentanyl | Test | 261,000 |
| Etodolac | Val | 438,854 |
| Azathioprine | Val | 100,000 |

Table 6: Held-out test and validation drugs for our single-drug meta-testing and meta-validation datasets for our Claims evaluation in Table 2. Drugs are unseen (excluded) during training. All drugs are known to cause pancytopenia [42].
| Drug pair | Split | # of Patients |
|---|---|---|
| Allopurinol + Hydralazine | Test | 7,859 |
| Methotrexate + Hydroxychloroquine | Test | 25,716 |
| Pregabalin + Fentanyl | Test | 5,424 |
| Indomethacin + Colchicine | Test | 42,846 |
| Mirtazapine + Memantine | Test | 10,215 |

Table 7: Held-out test pairs of drugs for our meta-testing and meta-validation datasets in Appendix Table 5. Both drugs are unseen (excluded) during training. All drugs are known to cause pancytopenia [42].
Split# Perturbagens# Cell-LinesMean #Cell Lines/Task
Meta-training9717775.79
Meta-validation304119.99
Meta-testing3011110.77
+ +Table 8: Composition of the meta-training, meta-validation and meta-testing sets for the LINCS dataset. No cell lines or drugs (tasks) were shared across any of the splits. + +
| Hyperparameter | Search range |
|---|---|
| Num. of layers | {2, 4, 6} |
| Dim. of hidden layers | {128, 256} |
| Dropout | {0, 0.1} |
| Learning rate | $\{3 \times 10^{-3}, 1 \times 10^{-3}, 3 \times 10^{-4}, 1 \times 10^{-4}\}$ |
| Meta learning rate | {1} |
| Weight decay | $\{5 \times 10^{-3}\}$ |
| Reptile k | {1, 10, 50} |
| L1 regularization coefficient | $\{0, 1 \times 10^{-7}, 5 \times 10^{-7}\}$ |

Table 9: Hyperparameter search space for CaML (our proposed method) on the medical claims dataset.
| Hyperparameter | Search range |
|---|---|
| Num. of como layers | {2, 4, 6} |
| Num. of covariate layers | {2, 4, 6} |
| Num. of propensity layers | {2, 4, 6} |
| Num. of treatment layers | {2, 4, 6} |
| Dim. of hidden como layers | {128, 256} |
| Dim. of hidden covariate layers | {128, 256} |
| Dim. of hidden treatment layers | {128, 256} |
| Dim. of hidden propensity layers | {16, 32, 64, 128} |
| Dropout | {0, 0.1} |
| Learning rate | $\{3 \times 10^{-3}, 1 \times 10^{-3}, 3 \times 10^{-4}, 1 \times 10^{-4}\}$ |
| Meta learning rate | {1} |
| Sin weight decay | $\{0, 5 \times 10^{-3}\}$ |
| Pro weight decay | $\{0, 5 \times 10^{-3}\}$ |
| GNN weight decay | $\{0, 5 \times 10^{-3}\}$ |
| Reptile k | {1, 10, 50} |
| L1 regularization coefficient | $\{0, 1 \times 10^{-7}, 5 \times 10^{-7}\}$ |

Table 10: Hyperparameter search space for SIN on the medical claims dataset. The SIN model consists of two stages, Stage 1 and Stage 2. For the Stage 1 model we searched the identical hyperparameter search space as for CaML (Table 9). For Stage 2, we used the hyperparameters shown in this table.
| Hyperparameter | Search range |
|---|---|
| Num. of covariate layers | {2, 4, 6} |
| Num. of treatment layers | {2, 4, 6} |
| Dim. of hidden treatment layers | {128, 256} |
| Dim. of hidden covariate layers | {128, 256} |
| Dropout | {0, 0.1} |
| Independence regularization coefficient | {0, 0.01, 0.1, 1.0} |
| Learning rate | $\{3 \times 10^{-3}, 1 \times 10^{-3}, 3 \times 10^{-4}, 1 \times 10^{-4}\}$ |
| Meta learning rate | {1} |
| Weight decay | $\{5 \times 10^{-3}\}$ |
| Reptile k | {1, 10, 50} |
| L1 regularization coefficient | $\{0, 1 \times 10^{-7}, 5 \times 10^{-7}\}$ |

Table 11: Hyperparameter search space for GraphITE on the medical claims dataset.
| Hyperparameter | Search range |
|---|---|
| Num. of out layers | {1, 2, 4} |
| Num. of r layers | {2, 4, 6} |
| Num. units p out | {32, 64, 128, 256} |
| Num. units s out | {32, 64, 128, 256} |
| Num. units s r | {32, 64, 128, 256} |
| Num. units p r | {32, 64, 128, 256} |
| Weight decay | $\{5 \times 10^{-3}\}$ |
| Orthogonal penalty | $\{0, 1 \times 10^{-5}, 1 \times 10^{-3}, 0.1\}$ |
| Private out | {True, False} |
| Learning rate | $\{3 \times 10^{-3}, 1 \times 10^{-3}, 3 \times 10^{-4}, 1 \times 10^{-4}\}$ |

Table 12: Hyperparameter search space for FlexTENet on the medical claims dataset.
| Hyperparameter | Search range |
|---|---|
| Num. of out layers | {1, 2, 4} |
| Num. of r layers | {2, 4, 6} |
| Num. units out | {128, 256} |
| Weight decay | $\{5 \times 10^{-3}\}$ |
| Penalty disc | $\{0, 1 \times 10^{-3}\}$ |
| Learning rate | $\{3 \times 10^{-3}, 1 \times 10^{-3}, 3 \times 10^{-4}, 1 \times 10^{-4}\}$ |

Table 13: Hyperparameter search space for TARNet on the medical claims dataset.
| Hyperparameter | Search range |
|---|---|
| Num. of out layers | {1, 2, 4} |
| Num. of r layers | {2, 4, 6} |
| Num. units r | {128, 256} |
| Num. units out | {128, 256} |
| Weight decay | $\{5 \times 10^{-3}\}$ |
| Learning rate | $\{3 \times 10^{-3}, 1 \times 10^{-3}, 3 \times 10^{-4}, 1 \times 10^{-4}\}$ |

Table 14: Hyperparameter search space for DragonNet on the medical claims dataset.
| Hyperparameter | Search range |
|---|---|
| Num. of estimators | [50, 250] |
| Max depth | [10, 50] |
| Min sample split | [2, 8] |
| Criterion regress | {squared error, absolute error} |
| Criterion binary | {gini, entropy} |
| Max features | {sqrt, log2, auto} |

Table 15: Hyperparameter search space for model-agnostic CATE estimators, i.e., R-learner, X-learner, RA-learner, and T-learner on the medical claims dataset.
| Hyperparameter | Search range |
|---|---|
| Num. of layers | {2, 4, 6} |
| Dim. of hidden layers | {512, 1024} |
| Dropout | {0, 0.1} |
| Learning rate | $\{3 \times 10^{-3}, 1 \times 10^{-3}, 3 \times 10^{-4}, 1 \times 10^{-4}\}$ |
| Meta learning rate | {0.1, 0.5, 0.9} |
| Weight decay | {0.1} |
| Reptile k | {1, 2, 3} |
| L1 regularization coefficient | $\{0, 1 \times 10^{-7}, 5 \times 10^{-7}\}$ |

Table 16: Hyperparameter search space for CaML (our proposed method) on the LINCS dataset.
| Hyperparameter | Search range |
|---|---|
| Num. of como layers | {2, 4, 6} |
| Num. of covariates layers | {2, 4, 6} |
| Num. of propensity layers | {2, 4, 6} |
| Num. of treatment layers | {2, 4, 6} |
| Dim. output | {128, 256} |
| Dim. of hidden treatment layers | {128, 256} |
| Dim. of hidden covariate layers | {128, 256} |
| Dim. of hidden como layers | {128, 256} |
| Dim. of hidden propensity layers | {16, 32, 64, 128} |
| Model dim. | {512, 1024} |
| Dropout | {0, 0.1} |
| Learning rate | $\{3 \times 10^{-3}, 1 \times 10^{-3}, 3 \times 10^{-4}, 1 \times 10^{-4}\}$ |
| Meta learning rate | {0.1, 0.5, 0.9} |
| Sin weight decay | {0.0, 0.005} |
| Pro weight decay | {0.0, 0.005} |
| GNN weight decay | {0.0, 0.005} |
| Weight decay | {0.1} |
| Reptile k | {1, 2, 3} |
| L1 regularization coefficient | $\{0, 1 \times 10^{-7}, 5 \times 10^{-7}\}$ |

Table 17: Hyperparameter search space for the SIN baseline on the LINCS dataset.
| Hyperparameter | Search range |
|---|---|
| Num. of covariate layers | {2, 4, 6} |
| Num. of treatment layers | {2, 4, 6} |
| Num. of layers | {2, 4, 6} |
| Dim. of hidden covariate layers | {128, 256} |
| Independence regularization coefficient | {0, 0.01, 0.1, 1.0} |
| Dropout | {0, 0.1} |
| Model dim. | {512, 1024} |
| Learning rate | $\{3 \times 10^{-3}, 1 \times 10^{-3}, 3 \times 10^{-4}, 1 \times 10^{-4}\}$ |
| Meta learning rate | {0.1, 0.5, 0.9} |
| Weight decay | {0.1} |
| Reptile k | {1, 2, 3} |
| L1 regularization coefficient | $\{0, 1 \times 10^{-7}, 5 \times 10^{-7}\}$ |

Table 18: Hyperparameter search space for the GraphITE baseline on the LINCS dataset.
| Hyperparameter | Search range |
|---|---|
| Dropout | $[10^{-4}, 10^{-1}]$ |
| Learning rate | $[10^{-5}, 10^{-3}]$ |
| Weight decay | $[10^{-5}, 10^{-2}]$ |
| Adversarial temperature | [1, 10] |
| Gamma | [0, 30] |
| Num. of sampled neighbors | 0-10 |
| Dim. of hidden layers | {64, 128, 256, 512} |
Table 19: The hyperparameter optimization search ranges used in the selection of the optimal model for the generation of knowledge graph node embeddings that would serve as intervention information for the medical claims dataset.

# Zero-shot Visual Relation Detection via Composite Visual Cues from Large Language Models

Lin Li $^{1,2}$ , Jun Xiao $^{1}$ , Guikun Chen $^{1}$ , Jian Shao $^{1}$ , Yueting Zhuang $^{1}$ , Long Chen $^{2*}$
$^{1}$ Zhejiang University $^{2}$ The Hong Kong University of Science and Technology
{mukti, junx, guikun.chen, jshao, yzhuang}@zju.edu.cn, longchen@ust.hk
https://github.com/HKUST-LongGroup/RECODE

![](images/beb75eba7aa6519ce31d135add1cb01af5c4eea28dfd04f15d9e2fe56a74e410.jpg)
Figure 1: Illustration of the challenges of VRD with the similar relation categories holding and carrying. Four images and their ground-truths are on the left. The subject and object for each triplet are denoted by blue and pink boxes, respectively. (a) A child may incorrectly identify these two relations based on their similar concepts alone. (b) Using class-based prompts, CLIP always maps these two relations to adjacent locations in the semantic space. (c) We humans always utilize composite visual cues to correctly distinguish between different relations. (d) Our proposed RECODE uses an LLM (e.g., GPT) to generate composite descriptions that aid the CLIP model in distinguishing between them.

# Abstract

Pretrained vision-language models, such as CLIP, have demonstrated strong generalization capabilities, making them promising tools in the realm of zero-shot visual recognition. Visual relation detection (VRD) is a typical task that identifies relationship (or interaction) types between object pairs within an image. However, naively utilizing CLIP with prevalent class-based prompts for zero-shot VRD has several weaknesses, e.g., it struggles to distinguish between different fine-grained relation types and it neglects essential spatial information of two objects. To this end, we propose a novel method for zero-shot VRD: RECODE, which solves RElation detection via COmposite DEscription prompts. Specifically, RECODE first decomposes each predicate category into subject, object, and spatial components. Then, it leverages large language models (LLMs) to generate description-based prompts (or visual cues) for each component. Different visual cues enhance the discriminability of similar relation categories from different perspectives, which significantly boosts performance in VRD.
To dynamically fuse different cues, we further introduce a chain-of-thought method that prompts LLMs to generate reasonable weights for different visual cues. Extensive experiments on four VRD benchmarks have demonstrated the effectiveness and interpretability of RECODE.

![](images/76f66247b5672a087c4e148ca9f71b914bed09f772fb9b643d38f408c31b5f30.jpg)

![](images/215029cfb4eec67a60a7a1c58bfed9e4176e969a41ad69c6834dc56b598be6e5.jpg)
Figure 2: A comparative analysis of predictions made by RECODE and baseline CLIP using class-based prompts. It illustrates how our method offers interpretability to the relation classification results through the similarity $\phi$ between the image and the description-based prompts.

# 1 Introduction

Recent advances in pretrained vision-language models (VLMs) [1, 2, 3, 4] (e.g., CLIP [1]) have shown remarkable generalization ability and achieved impressive performance on zero-shot recognition tasks. Specifically, CLIP employs two encoders: an image encoder that converts images into visual features, and a text encoder that transforms sentences into semantic features. This design allows the encoders to map different modalities into a common semantic space. When the inputs to the text encoder are class-based prompts, such as "A [CLASS]" or "A photo of [CLASS]", CLIP can compare the image and prompts in the shared semantic space, thereby enabling zero-shot recognition of novel categories [1]. Compared to object recognition, visual relation detection (VRD) is much more challenging: it needs to identify the relation types between object pairs within an image in the form of $\langle$subject, relation, object$\rangle$ [5, 6, 7, 8, 9]. It differs from object recognition in that it requires an understanding of how objects are related to each other. By crafting class-based prompts to describe these relation types, CLIP could potentially be extended to perform zero-shot VRD.
+ +However, this straightforward baseline presents notable challenges. Imagine you are a child asked to distinguish relation categories "holding" and "carrying", both involving a person and an object. Based on the similar concepts of "holding" (i.e., a person having an object in their hands) and "carrying" (i.e., a person supporting an object in their hands), it would be difficult to determine the correct prediction (cf., Figure 1(a)). In other words, class-based prompts of "holding" and "carrying" might be projected to adjacent locations in semantic space by CLIP, leading to a relation sensitivity issue: CLIP struggles to differentiate between the subtle nuances of similar relations. Secondly, class-based prompts overlook the unique spatial cues inherent to each relation category, leading to a spatial discriminability issue. The "holding" category generally suggests the object being at a certain height and orientation relative to the person, while "carrying" implies a different spatial position, typically with the object located lower and possibly supported by the person's entire body. The neglect of spatial cues leads to inaccuracies in distinguishing between such spatial-aware relation categories. Moreover, applying CLIP in this manner brings about a computational efficiency issue. Using CLIP requires cropping each union region of a subject-object pair separately from the original image (i.e., $N^2$ crops for $N$ proposals), leading to computational inefficiencies. + +Nonetheless, we humans can distinguish relation categories from different visual cues. For example, from the subject's perspective, we could think that in the case of "holding", a person might be standing while having an object, such as an umbrella, in their hand. Meanwhile, in the case of "carrying", a person should be in a more engaged position, perhaps walking or running with both hands and arms supporting a heavy object, like a suitcase. 
In addition, spatial cues also play an important role in identifying these relation categories. For example, when a person is carrying an umbrella, the umbrella is usually positioned lower and closer to the person's body compared to when the person is holding an umbrella. Based on these visual cues, we can easily identify scenarios such as "person-holding-umbrella" and "person-carrying-umbrella" as in Figure 1(c).

Inspired by humans' ability to extract and utilize different visual cues, we present a novel method for zero-shot VRD: RECODE, which classifies RElations via COmposite DEscriptions. It first uses large language models (LLMs) [10] to generate detailed and informative descriptions for different components of relation categories, such as subject, object, and spatial. These descriptions are then used as description-based prompts for the CLIP model, enabling it to focus on specific visual features that help distinguish between similar relation categories and improve VRD performance. Specifically, for the subject and object components, these prompts include visual cues such as appearance (e.g., with legs), size (e.g., small), and posture (e.g., in a sitting posture). For the spatial component, these prompts include cues related to the spatial relationships between objects, such as relative position and distance. By incorporating different visual cues, RECODE enhances the discriminability of similar relation categories, such as "riding" and "mounted", based on the different postures of the subject, e.g., "seated on the back of animal" for the subject of "riding". Similarly, spatial visual cues can be used to differentiate between "laying on" and "holding" based on the relative position between the subject and object, such as "subject above object" and "subject under object" (cf., Figure 2).
In addition, we explore the limitations of several description generation prompts for visual cues, e.g., the relation class description prompt [11], and then design a guided relation component description prompt that utilizes high-level object categories to generate more accurate visual cues for each relation category. For instance, if the high-level category of the object is "animal", the generated object descriptions for the relation "riding" are tailored to the "animal" category, e.g., "with four legs", instead of the "product" category, e.g., "with wheels". Meanwhile, to better fuse the evidence from different visual cues, we further leverage LLMs to predict reasonable weights for different components. In particular, we design a chain-of-thought (CoT) method [12] to break down this weight assignment problem into smaller, more manageable pieces, and prompt the LLM to generate a series of rationales and weights.

To evaluate RECODE, we conducted experiments on four benchmark datasets: Visual Genome (VG) [13] and GQA [14] for scene graph generation (SGG), and HICO-DET [15] and V-COCO [16] for human-object interaction (HOI) detection. Experimental results demonstrate the generalization and interpretability of our method. In summary, we make three main contributions in this paper: 1) We analyze the weaknesses of the prevalent class-based prompt for zero-shot VRD in detail and propose a novel solution, RECODE. RECODE leverages the power of LLMs to generate description-based prompts (visual cues) for each component of the relation class, enhancing the CLIP model's ability to distinguish between various relation categories. 2) We introduce a chain-of-thought method that breaks down the problem into smaller, more manageable pieces, allowing the LLM to generate a series of rationales for each cue, ultimately leading to reasonable weights for each component.
3) We conduct experiments on four benchmark datasets and demonstrate the effectiveness and interpretability of our method.

# 2 Approach

Typically, VRD comprises two sub-tasks: object detection and relation classification [5]. Since zero-shot object detection has been extensively studied [17, 1, 11], in this paper we primarily focus on zero-shot relation classification. Specifically, given the bounding boxes (bboxes) $\{b_i\}$ and object categories $\{o_i\}$ of all objects, our target is to predict the visual relation (or predicate/interaction) categories $\{r_{ij}\}$ between pairwise objects. To facilitate presentation, we use $s$, $o$, and $p$ to denote the subject, object, and their spatial position in a triplet respectively, and $r$ to denote the relation category.

![](images/d7a69b01197cdc607c70741236fc4e6787571f731f1191a14c3a2c033a15ffb9.jpg)
Figure 3: The framework of RECODE. 1) Visual feature decomposing decomposes the triplet into subject, object, and spatial features. 2) Semantic feature decomposing decomposes relation categories into subject, object, and spatial descriptions. 3) Relation classification calculates similarities between decomposed visual and semantic features and applies softmax to obtain the probability distribution.

![](images/360e7ef99f0f290ceef8498e526f1752e20ba7bea8902396e6d648e8256abb2d.jpg)

![](images/1a03762b410b4e1ef3f395a904c58bde3290b1605b6224531efb3ff67c7c215a.jpg)

Class-based Prompt Baseline for Zero-Shot VRD. Following recent zero-shot object recognition methods, a straightforward solution for zero-shot VRD is CLIP with class-based prompts. Specifically, a pretrained CLIP image encoder $V(\cdot)$ and a pretrained CLIP text encoder $T(\cdot)$ are used to classify pairwise objects with a set of relation classes. For each relation class, a natural language class-based prompt $p_c$ is generated, incorporating the relation information, e.g., "[REL-CLS]-ing/ed" or "a photo of [REL-CLS]".
Each prompt is then passed through $T(\cdot)$ to get a semantic embedding $t$, while the union region of a subject-object pair is passed through $V(\cdot)$ to get a visual embedding $v$. The cosine similarity between $v$ and the $t$ of different relation categories is calculated and processed by a softmax function to obtain the probability distribution over all relation categories.

# 2.1 Zero-shot VRD with Composed Visual Cues

To overcome the limitations of class-based prompts, we propose a novel approach, RECODE, for zero-shot VRD. It consists of three parts: visual feature decomposing, semantic feature decomposing, and relation classification (cf., Figure 3). In the first two parts, we decompose the visual features of the triplet into subject, object, and spatial features, and then generate semantic features for each component. In the last part, we calculate the similarities between the decomposed visual features and a set of semantic features, and aggregate them to get the final predictions over all relations.

Visual Feature Decomposing. To enhance spatial discriminability and computational efficiency, we decompose the visual features of a triplet into subject, object, and spatial features. For subject and object features, we crop the regions of the subject and object from the original image using the given bboxes $b_{s}$ and $b_{o}$, and encode them into visual embeddings $v_{s}$ and $v_{o}$ using the image encoder $V(\cdot)$ of CLIP. For spatial features, we aim to obtain the spatial relationship between the subject and object based on their bounding boxes. However, directly rendering a spatial image for every pair is computationally expensive due to the diversity of spatial positions (up to $N^2$ configurations per image). To address this, we simulate the spatial relationship between the subject and object using a finite set of spatial images, in which the subject and object are represented by red and green bboxes, respectively.
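A minimal sketch of this bbox-to-spatial-template matching follows; the concrete attribute bins and thresholds below are illustrative assumptions, not the paper's exact implementation:

```python
import math

# Sketch of the bbox-based spatial simulation. The attribute bins below are
# illustrative assumptions, not the paper's exact values.

def spatial_key(sub_box, obj_box):
    """Map a (subject, object) bbox pair, each (x1, y1, x2, y2), to a key
    indexing one pre-rendered spatial template image."""
    sx, sy = (sub_box[0] + sub_box[2]) / 2, (sub_box[1] + sub_box[3]) / 2
    ox, oy = (obj_box[0] + obj_box[2]) / 2, (obj_box[1] + obj_box[3]) / 2

    # relative position of the subject w.r.t. the object
    horiz = "left" if sx < ox else "right"
    vert = "above" if sy < oy else "below"

    # center distance, normalized by the object's diagonal, in 3 bins
    diag = math.hypot(obj_box[2] - obj_box[0], obj_box[3] - obj_box[1])
    d = math.hypot(sx - ox, sy - oy) / max(diag, 1e-6)
    dist = "near" if d < 0.5 else ("mid" if d < 1.5 else "far")

    # relative area of subject vs. object, in 3 bins
    s_area = (sub_box[2] - sub_box[0]) * (sub_box[3] - sub_box[1])
    o_area = (obj_box[2] - obj_box[0]) * (obj_box[3] - obj_box[1])
    r = s_area / max(o_area, 1e-6)
    size = "smaller" if r < 0.5 else ("similar" if r < 2.0 else "larger")

    return (horiz, vert, dist, size)

# subject above and overlapping a larger object, e.g. a rider on an animal
print(spatial_key((40, 10, 80, 60), (30, 40, 110, 120)))  # ('left', 'above', 'near', 'smaller')
```

Because each pair maps to a key from a small fixed set, only that finite set of template images (red subject box, green object box) ever needs to be encoded by $V(\cdot)$.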
We define four attributes (shape, size, relative position, and distance) based on bounding box properties. Each attribute is assigned a finite set of values to construct a finite set of simulated spatial images. For a given triplet, we match the calculated attribute values to the most similar simulated image. The matched spatial image is then encoded into a visual embedding $v_{p}$ using $V(\cdot)$ of CLIP.

![](images/275670c3dceb612386ab54015877bc6282af99463082530518e8a3bb8a46beb3.jpg)
Figure 4: Examples of different prompts used for generating descriptions of visual cues. (a) Relation class description generates descriptions for each relation class directly. (b) Relation component description generates descriptions for each component of the relation separately. (c) Guided relation component description incorporates the high-level object category to guide the generation process.

Semantic Feature Decomposing. To improve the CLIP model's ability to distinguish between different relation classes, we incorporate a set of description-based prompts $D$ to augment the original class-based prompt for each relation category. For the subject and object components, we generate sets of description-based prompts $D_{s}$ and $D_{o}$ to provide additional visual cue information; the generation process is described in Sec. 2.2. These prompts contain object categories with specific visual cues that highlight the unique characteristics of the relation being performed, e.g., "women, with legs", which enhances the discriminability between similar relation categories. The spatial component contains a set of description-based prompts $D_{p}$ that include information about the relative position and distance between the subject and object in the image. By incorporating this additional information, we aim to distinguish between relations based on spatial location.
After generating these sets of description-based prompts, we obtain semantic embeddings $\{t_{d_{s_i}}\}$, $\{t_{d_{o_i}}\}$, and $\{t_{d_{p_i}}\}$ using the text encoder $T(\cdot)$, separately. These embeddings, along with the class-based prompt embedding $t_c$, are used for relation classification.

Relation Classification. In this step, we compute the similarity score between the visual and semantic features to obtain the relation probability distribution. We first calculate the cosine similarity $\phi(\cdot, \cdot)$ between each visual embedding and semantic embedding for each relation category $r$. The final score incorporates both class-based and description-based prompts, and is calculated as follows:

$$
S(r) = \underbrace{\phi\left(v_{s}, t_{c}\right) + \phi\left(v_{o}, t_{c}\right)}_{\text{class-based prompts}} + \underbrace{\sum_{k \in \{s, o, p\}} \frac{w_{k}}{\left|D_{k}(r)\right|} \left[ \sum_{d_{k_{i}} \in D_{k}(r)} \phi\left(v_{k}, t_{d_{k_{i}}}\right) \right]}_{\text{description-based prompts}}, \tag{1}
$$

where $w_{k}$ represents the importance of visual cues for each component $k \in \{s, o, p\}$, and $|D_{k}(r)|$ denotes the number of visual cues in $D_{k}(r)$ for relation category $r$. We compute the similarity of individual visual cues for each component and then take their average. The weights of the different components are determined by an LLM, which will be discussed in Sec. 2.2. Finally, we apply a softmax operation to the scores to obtain the probability distribution over all relation categories.

# 2.2 Visual Cue Descriptions and Weights Generation

LLMs, such as GPT [10], have been shown to contain significant world knowledge.
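Before detailing how the descriptions and weights are generated, here is a toy sketch of the scoring rule in Eq. (1). Random vectors stand in for the CLIP embeddings, and all helper names are ours:

```python
import math, random

def cos(a, b):
    # cosine similarity phi(., .) between two vectors
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def score(v, t_c, desc, w):
    """Eq. (1): class-based part plus weighted per-component description part."""
    s = cos(v["s"], t_c) + cos(v["o"], t_c)               # class-based prompts
    for k in ("s", "o", "p"):                             # description-based prompts
        s += w[k] * sum(cos(v[k], t) for t in desc[k]) / len(desc[k])
    return s

random.seed(0)
emb = lambda: [random.gauss(0, 1) for _ in range(64)]     # stand-in for CLIP features

v = {k: emb() for k in ("s", "o", "p")}                   # decomposed visual features
w = {"s": 0.6, "o": 0.3, "p": 0.1}                        # e.g. LLM weights for "looking at"
scores = [score(v, emb(), {k: [emb(), emb()] for k in ("s", "o", "p")}, w)
          for _ in ("holding", "carrying")]               # one score per relation

exps = [math.exp(x) for x in scores]
probs = [e / sum(exps) for e in exps]                     # softmax over relations
print(probs)
```

Note that dividing the inner sum by `len(desc[k])` is exactly the $1/|D_k(r)|$ averaging in Eq. (1), so each component contributes its mean cue similarity scaled by $w_k$.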
In this section, we present the process of generating the visual cue descriptions $D_{s}$, $D_{o}$, and $D_{p}$, as well as the weights $w_{s}$, $w_{o}$, and $w_{p}$, for each component of each relation category using LLMs.

# 2.2.1 Visual Cue Descriptions

Here, we explore methods for generating descriptions of visual cues for relation decomposition. Inspired by the zero-shot image classification work [11], we first propose the relation class description prompt, which generates descriptions at the class level (cf., Figure 4(a)). It has the advantage of producing descriptions that are easy to interpret and understand. However, it may result in overly diverse and information-rich descriptions that hinder the extraction of meaningful visual cues, e.g., "speed of the person" in Figure 4(a).

To address this limitation, we then consider the relation component description prompt, which decomposes the relation into its subject and object components and generates descriptions of their visual features separately (cf., Figure 4(b)). While this type of prompt allows for more focused and specific descriptions of visual cues, it may not be effective in capturing the variations in visual features between different subject-object category pairs. For example, "man-riding-horse" and "person-riding-bike" typically have totally different visual features for the object. The visual cues "reins" and "saddle" of the object in Figure 4(b) are inappropriate for a "bike".

![](images/47f34b1e9f70945179659824c7e0c4974b492e0857a51bdc7e4ebbf22e3d4aaa.jpg)
Figure 5: Illustration of the effectiveness of the CoT method in generating reasonable visual cue weights. (a) Prompt without CoT. The LLM assigns the same weight to subject and object. (b) Prompt with CoT. The LLM analyzes the importance of each cue step by step and assigns more reasonable weights.

Therefore, we design the guided relation component description prompt.
It builds upon the second method by incorporating the high-level category information of the object into the generation process, leading to more accurate and informative descriptions of the visual features of both the subject and object components (cf., Figure 4(c)). To achieve this, we classify the object into high-level classes, such as "human", "animal", and "product", to guide the description generation. For example, "bike" is classified as "product", and "horse" is classified as "animal". This allows for the separate generation of visual feature descriptions for each high-level object class, e.g., "a harness or saddle on its body" for "animal", resulting in more precise and relevant visual cues for each relation category.

# 2.2.2 Visual Cue Weights

Intuitively, different combinations of visual cues may have varying degrees of importance in relation classification. For example, for the relation "looking at", the visual cue "with visible features" of the object may not be as informative as the visual cue "with eyes" of the subject. To account for this, we leverage the impressive knowledge and reasoning abilities of LLMs to analyze the discriminative power of different visual cues and dynamically assign weights accordingly. Specifically, we provide each combination of visual cues as input to the LLM and prompt it to determine the appropriate weight of each cue for distinguishing the given predicate. The prompts used for this purpose are shown in Figure 5.

Chain-of-Thought (CoT) Prompting. To ensure the generated weights are reasonable, we utilize a CoT method, which has demonstrated remarkable reasoning abilities [12, 18]. Specifically, we prompt the LLM with the stepwise reasoning prompt "Let's think step by step!" to break down the problem into smaller, more manageable pieces. The LLM then generates a series of rationales, and these rationales lead to reasonable weights.
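A sketch of this weight-generation step follows: building a Figure 5-style prompt with the CoT trigger, and extracting weights from a hypothetical LLM reply. The exact reply format, and hence the regex, is an assumption for illustration:

```python
import re

# Sketch of the CoT weight generation of Sec. 2.2.2. The prompt paraphrases
# Figure 5; the reply format (and so the parsing regex) is an assumption.

def weight_prompt(relation, cues):
    lines = ["Suppose you are a relation classification model."]
    for comp in ("subject", "object", "position"):
        lines.append(f"The visual features of {comp}: {cues[comp]}")
    lines.append(f'Q: How do you weight these visual features to determine the '
                 f'relation is "{relation}"? The sum of weights must be 1.0!')
    lines.append("A: Let's think step by step!")        # the CoT trigger
    return "\n".join(lines)

def parse_weights(reply):
    pairs = re.findall(r"Visual features of (\w+): ([0-9.]+)", reply)
    w = {k: float(x) for k, x in pairs}
    total = sum(w.values())                             # guard against drift
    return {k: x / total for k, x in w.items()}

cues = {"subject": "with eyes directed towards the object",
        "object": "with visible features such as a front or screen",
        "position": "subject positioned near the object"}
print(weight_prompt("looking at", cues))

reply = ("... Therefore, we can weight these visual features as follows:\n"
         "Visual features of subject: 0.6\n"
         "Visual features of object: 0.3\n"
         "Visual features of position: 0.1\n")
print(parse_weights(reply))  # weights re-normalized to sum to 1
```

Re-normalizing after parsing keeps the weights usable even when the LLM's stated values do not sum exactly to 1.0.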
For example, in Figure 5 we demonstrate the importance of the CoT method in generating more accurate weights. Without the stepwise reasoning prompt, the LLM generates the same weight for both the subject and the object visual cues of "looking at", which is clearly unreasonable. With the CoT prompt, however, the LLM analyzes each cue step by step, leading to a more accurate assignment of weights, i.e., the cues about the subject are relatively more important. In order to standardize the format of the strings generated by LLMs for extracting the different components of visual cues and weights, we make certain modifications to the prompts for descriptions and weights.

# 3 Experiment

# 3.1 Experiment setup

Datasets. We evaluated our method on four zero-shot VRD benchmarks: 1) VG [13] contains 26,443 images for testing, each annotated with object and predicate labels to form a scene graph. Following previous works [5], we used the pre-processed VG with 150 object classes. We adopted the 24 semantic predicate classes proposed in [19, 20], as they are more informative and challenging to classify. 2) GQA [14] is a large-scale SGG dataset. We used the same split provided by [21], which contains 8,208 images for testing with 200 object classes. As for predicate classes, we selected 26 semantic predicate classes by referring to VG. 3) HICO-DET [15] contains 9,658 testing images annotated with 600 HOI triplets derived from combinations of 117 verb classes and 80 object classes. 4) V-COCO [16] comprises 4,946 testing images annotated with 29 action categories.

Table 1: Evaluation results on the test set of VG and GQA datasets. † denotes removing the guidance from the high-level object category. * denotes integration with the Filter strategy.

<table>
<tr><th rowspan="2">Data</th><th rowspan="2">Method</th><th colspan="12">Predicate Classification</th></tr>
<tr><th>R@20</th><th>$\bigtriangleup$</th><th>R@50</th><th>$\bigtriangleup$</th><th>R@100</th><th>$\bigtriangleup$</th><th>mR@20</th><th>$\bigtriangleup$</th><th>mR@50</th><th>$\bigtriangleup$</th><th>mR@100</th><th>$\bigtriangleup$</th></tr>
<tr><td rowspan="5">VG</td><td>CLS</td><td>7.2</td><td>-</td><td>10.9</td><td>-</td><td>13.2</td><td>-</td><td>9.4</td><td>-</td><td>14.0</td><td>-</td><td>17.6</td><td>-</td></tr>
<tr><td>CLSDE</td><td>7.0</td><td>-0.2</td><td>10.6</td><td>-0.3</td><td>12.9</td><td>-0.3</td><td>8.5</td><td>-0.9</td><td>13.6</td><td>-0.4</td><td>16.9</td><td>-0.7</td></tr>
<tr><td>RECODE†</td><td>7.3</td><td>0.1</td><td>11.2</td><td>0.3</td><td>15.4</td><td>2.2</td><td>8.2</td><td>-1.2</td><td>13.5</td><td>-0.5</td><td>18.3</td><td>0.7</td></tr>
<tr><td>RECODE</td><td>9.7</td><td>2.5</td><td>14.9</td><td>4.0</td><td>19.3</td><td>6.1</td><td>10.2</td><td>0.8</td><td>16.4</td><td>2.4</td><td>22.7</td><td>5.1</td></tr>
<tr><td>RECODE*</td><td>10.6</td><td>3.4</td><td>18.3</td><td>7.4</td><td>25.0</td><td>11.8</td><td>10.7</td><td>1.3</td><td>18.7</td><td>4.7</td><td>27.8</td><td>10.2</td></tr>
<tr><td rowspan="5">GQA</td><td>CLS</td><td>5.6</td><td>-</td><td>7.7</td><td>-</td><td>9.9</td><td>-</td><td>6.3</td><td>-</td><td>9.5</td><td>-</td><td>12.2</td><td>-</td></tr>
<tr><td>CLSDE</td><td>5.4</td><td>-0.2</td><td>7.2</td><td>-0.5</td><td>9.3</td><td>-0.6</td><td>6.0</td><td>-0.3</td><td>8.8</td><td>-0.7</td><td>11.5</td><td>-0.7</td></tr>
<tr><td>RECODE†</td><td>5.2</td><td>-0.4</td><td>7.8</td><td>0.1</td><td>10.2</td><td>0.3</td><td>5.8</td><td>-0.5</td><td>8.9</td><td>-0.6</td><td>11.3</td><td>-0.9</td></tr>
<tr><td>RECODE</td><td>6.3</td><td>0.7</td><td>9.4</td><td>1.7</td><td>11.8</td><td>1.9</td><td>7.8</td><td>1.5</td><td>11.9</td><td>2.4</td><td>15.1</td><td>2.9</td></tr>
<tr><td>RECODE*</td><td>7.0</td><td>1.4</td><td>11.1</td><td>3.4</td><td>15.4</td><td>5.5</td><td>9.4</td><td>3.1</td><td>14.8</td><td>5.3</td><td>20.4</td><td>8.2</td></tr>
</table>

Evaluation Metrics. For the SGG datasets (i.e., VG and GQA), we report Recall@K ($\mathbf{R@K}$), which indicates the proportion of ground-truths that appear among the top-K confident predictions, and mean Recall@K ($\mathbf{mR@K}$), which averages the $\mathbf{R@K}$ scores calculated for each category separately [22]. For the HOI datasets (i.e., HICO-DET and V-COCO), we report mean Average Precision ($\mathbf{mAP}$) [23].

Implementation Details. For the LLM, we employed GPT-3.5-turbo, a highly performant variant of the GPT model. As for CLIP, we leveraged OpenAI's publicly accessible resources, specifically opting for the Vision Transformer with a base configuration (ViT-B/32) as the default backbone.

Settings. The bounding boxes and categories of objects were given in all experiments. We compared our RECODE with two baselines: 1) CLS, which uses relation-CLasS-based prompts (e.g., "riding") to compute the similarity between the image and text. 2) CLSDE, which uses prompts of relation CLasS DEscription, as shown in Figure 4(a). Each component of the proposed framework can serve as a plug-and-play module for zero-shot VRD. Specifically: 1) Filter denotes filtering out unreasonable predictions (e.g., kid-eating-house) with rules generated by GPT. 2) Cue denotes using description-based prompts (Sec. 2.1). 3) Spatial denotes using spatial images as additional features. 4) Weight denotes using dynamic weights generated by GPT to determine the importance of each feature, i.e., visual cue weights.
# 3.2 Results and Analysis

In this work, we evaluated the prediction performance of the proposed framework on two related tasks, i.e., SGG and HOI. The former outputs a list of relation triplets $\langle \text{sub}, \text{pred}, \text{obj} \rangle$, while the latter fixes the category of sub to human. Overall, our method achieved significant improvements on both tasks compared to the CLS baseline, which shows the superiority of our method.

Evaluation on SGG. From the results in Table 1, we have the following observations: 1) CLSDE showed worse performance than the trivial CLS baseline. This is because of the considerable noise in CLSDE, which may hinder the model from attending to the most distinguishable parts. 2) With proper guidance, RECODE achieved considerable improvements over the baselines, e.g., $0.8\%$ to $6.1\%$ gains on VG and $0.7\%$ to $2.9\%$ gains on GQA. The performance drops of RECODE† also demonstrate the importance of guidance from high-level object categories during the generation process. 3) Integrated with the filtering strategy, RECODE* achieved the best performance on all metrics, which suggests that commonsense knowledge is complementary and effective for zero-shot VRD. It also demonstrates that CLIP struggles to distinguish abstract concepts, i.e., the relation sensitivity issue mentioned in Sec. 1.

Table 2: Evaluation results on the test set of HICO-DET and V-COCO datasets.

<table>
<tr><th rowspan="2">Method</th><th colspan="3">HICO-DET</th><th colspan="2">V-COCO</th></tr>
<tr><th>Full</th><th>Rare</th><th>Non-Rare</th><th>Role 1</th><th>Role 2</th></tr>
<tr><td>CLS</td><td>32.3</td><td>33.2</td><td>31.8</td><td>25.5</td><td>28.6</td></tr>
<tr><td>CLSDE</td><td>32.5</td><td>33.1</td><td>32.2</td><td>25.6</td><td>28.8</td></tr>
<tr><td>RECODE†</td><td>32.5</td><td>33.0</td><td>32.4</td><td>25.7</td><td>28.8</td></tr>
<tr><td>RECODE</td><td>32.7</td><td>33.2</td><td>32.5</td><td>26.0</td><td>29.0</td></tr>
</table>

Evaluation on HOI. Since the standard HOI evaluation procedure already filters out unreasonable predictions, RECODE* was not evaluated here. From the results in Table 2, we can observe that the performance gains were lower than those on SGG, e.g., $0.0\%$ to $0.7\%$ gains on HICO-DET and $0.4\%$ to $0.5\%$ gains on V-COCO. The reasons are two-fold. On the one hand, since the category of the subject is always human, its features are too similar to be distinguished by CLIP. On the other hand, some of the actions are very similar in appearance. For example, distinguishing between actions like "person-throw-sports ball" and "person-catch-sports ball" is challenging due to their visual similarity.

# 3.3 Diagnostic Experiment

Architectures. We investigated the impact of changing the architecture of CLIP, as shown in Table 3. From the results, we can observe consistent improvements regardless of the architecture used.

Table 3: Ablation studies on different architectures of CLIP. The official released weights are used.

<table>
<tr><th rowspan="2">Architecture</th><th rowspan="2">Method</th><th colspan="6">Predicate Classification</th></tr>
<tr><th>R@20</th><th>R@50</th><th>R@100</th><th>mR@20</th><th>mR@50</th><th>mR@100</th></tr>
<tr><td rowspan="2">ViT-L/14</td><td>CLS*</td><td>8.3</td><td>15.0</td><td>21.5</td><td>7.6</td><td>14.2</td><td>24.2</td></tr>
<tr><td>RECODE*</td><td>11.2</td><td>19.9</td><td>28.0</td><td>9.1</td><td>18.5</td><td>28.1</td></tr>
<tr><td rowspan="2">ViT-L/14@336px</td><td>CLS*</td><td>8.6</td><td>15.4</td><td>21.8</td><td>7.7</td><td>13.9</td><td>23.0</td></tr>
<tr><td>RECODE*</td><td>12.1</td><td>21.1</td><td>29.2</td><td>9.7</td><td>19.5</td><td>28.2</td></tr>
<tr><td rowspan="2">ViT-B/32</td><td>CLS*</td><td>7.5</td><td>13.7</td><td>19.4</td><td>9.1</td><td>15.9</td><td>24.0</td></tr>
<tr><td>RECODE*</td><td>10.6</td><td>18.3</td><td>25.0</td><td>10.7</td><td>18.7</td><td>27.8</td></tr>
<tr><td rowspan="2">ViT-B/16</td><td>CLS*</td><td>8.6</td><td>15.5</td><td>22.1</td><td>9.8</td><td>17.2</td><td>25.2</td></tr>
<tr><td>RECODE*</td><td>12.6</td><td>21.0</td><td>28.5</td><td>12.5</td><td>20.2</td><td>30.0</td></tr>
</table>

Key Component Analysis. The results are summarized in Table 4. The first row refers to the CLS baseline. Four crucial conclusions can be drawn. First, with the guidance of Cue, consistent improvements can be observed, e.g., $0.2\%$ to $3.4\%$ gains on $\mathrm{R@K}$ w/o Filter and $1.3\%$ to $4.1\%$ gains on $\mathrm{R@K}$ with Filter. Second, by introducing the spatial feature, the relative position of subject and object is considered, resulting in notable performance gains on R@K ($0.8\%$ to $1.7\%$) and mR@K ($0.3\%$ to $1.0\%$) w/o Filter compared to just using Cue. This is because the spatial feature is of importance for relation detection [6]. Third, benefiting from the impressive reasoning ability of LLMs, the proposed weighting strategy can determine the importance of different cues, thus achieving further improvements, e.g., $0.5\%$ to $1.1\%$ gains on R@K compared to average aggregation. Fourth, by filtering out unreasonable predictions, consistent improvements can be observed. The reason may be that the relation detection performance of CLIP alone is not accurate enough; empirically, commonsense knowledge is a feasible way to filter such noise. Combining all components yields the best overall performance on all evaluation metrics.

Table 4: Analysis of key components on the test set of VG. ✓ marks the components enabled in each row.

<table>
<tr><th rowspan="2">Filter</th><th rowspan="2">Cue</th><th rowspan="2">Spatial</th><th rowspan="2">Weight</th><th colspan="6">Predicate Classification</th></tr>
<tr><th>R@20</th><th>R@50</th><th>R@100</th><th>mR@20</th><th>mR@50</th><th>mR@100</th></tr>
<tr><td></td><td></td><td></td><td></td><td>7.2</td><td>10.9</td><td>13.2</td><td>9.4</td><td>14.0</td><td>17.6</td></tr>
<tr><td></td><td>✓</td><td></td><td></td><td>7.4</td><td>12.3</td><td>16.6</td><td>9.0</td><td>14.0</td><td>19.5</td></tr>
<tr><td></td><td>✓</td><td>✓</td><td></td><td>9.1</td><td>13.4</td><td>17.4</td><td>9.3</td><td>15.0</td><td>20.3</td></tr>
<tr><td></td><td>✓</td><td></td><td>✓</td><td>7.9</td><td>13.4</td><td>17.7</td><td>9.3</td><td>14.7</td><td>20.5</td></tr>
<tr><td></td><td>✓</td><td>✓</td><td>✓</td><td>9.7</td><td>14.9</td><td>19.3</td><td>10.2</td><td>16.4</td><td>22.7</td></tr>
<tr><td>✓</td><td></td><td></td><td></td><td>7.5</td><td>13.7</td><td>19.4</td><td>9.1</td><td>15.9</td><td>24.0</td></tr>
<tr><td>✓</td><td>✓</td><td></td><td></td><td>8.8</td><td>15.9</td><td>23.5</td><td>10.3</td><td>17.2</td><td>26.2</td></tr>
<tr><td>✓</td><td>✓</td><td>✓</td><td></td><td>9.3</td><td>16.3</td><td>22.5</td><td>10.1</td><td>18.1</td><td>25.5</td></tr>
<tr><td>✓</td><td>✓</td><td></td><td>✓</td><td>10.0</td><td>17.5</td><td>24.8</td><td>10.4</td><td>17.8</td><td>26.7</td></tr>
<tr><td>✓</td><td>✓</td><td>✓</td><td>✓</td><td>10.6</td><td>18.3</td><td>25.0</td><td>10.7</td><td>18.7</td><td>27.8</td></tr>
</table>

Case study. To investigate the most important regions for distinguishing relations, we visualized the attention maps given different images and prompts (cf., Figure 6). From the visualization with class-based prompts, we can observe that CLIP may attend to regions unrelated to the query prompts, e.g., focusing on the body of a person given the relation "growing on". We attribute this phenomenon to the insufficient information within the given prompts, which is also our motivation for introducing visual cue descriptions. As for description-based prompts, CLIP attends to the right regions with the guidance of descriptions, e.g., focusing on colorful patterns on the product given the relation "painted on".

![](images/194703088e05095203f8c1bd9b15324c306ced1bc0e20b36ff84c1e87a962de8.jpg)
Figure 6: Visualization of CLIP attention maps on input images with different prompts. The right side shows the partial description-based prompts generated for each predicate category given the high-level object category. They are used to generate the corresponding attention maps on the right.
+ +# 4 Related Work + +Visual Relation Detection (VRD) aims to predict the relationships of given subject-object pairs, which can be viewed as a pair-wise classification task and have been widely studied in the image domain, e.g., scene graph generation (SGG) [5, 6, 22, 24] and human-object interaction (HOI) detection [25, 26, 27]. Previous solutions mainly focus on learning representations from the training samples on pre-defined categories, which may suffer from noisy annotations [22] or long-tailed predicate distribution [6, 28] and are far from the needs of the real-world scenarios. Recently, some attempts [8, 29] adopted prompt-tuning [30] to predict unseen categories during inference. However, since the learnable prompts may be overfitting when trained on seen categories, their performance is sensitive to the split of seen/unseen categories [31]. In contrast, our method can predict the relationships directly without any training samples, and has better interpretability and generalization ability, especially in rare informative relation categories. + +Zero-shot Visual Recognition enables the model to recognize new categories that it has never seen during training, which is one of the research hotspots in the vision community. Aligning visual representations to pre-trained word embeddings (e.g., Word2Vec [32] and GloVe [33]) is an intuitive and feasible way to achieve this goal [34]. More recently, VLMs, which use contrastive learning [35] to learn a joint space for vision and language, have demonstrated their impressive zero-shot ability [1]. Therefore, many zero-shot works [36, 37, 38] adopted such VLMs as their basic component to use the knowledge of the learned joint space. However, most of them only utilized the class name of unseen categories during inference, which makes an over-strong assumption that the text encoder project proper embeddings with only category names [11]. 
Then, Menon and Vondrick [11] proposed to query LLMs for the rich context of additional information. Nonetheless, it is non-trivial to apply such paradigms to VRD as discussed in Sec. 1. To the best of our knowledge, we are the first to leverage both LLMs and VLMs for VRD in an efficient, effective, and explainable way. + +# 5 Conclusion + +In this paper, we proposed a novel approach for zero-shot Visual Relationship Detection (VRD) that leverages large language models (LLMs) to generate detailed and informative descriptions of visual cues for each relation category. The proposed method addresses the limitations of traditional class-based prompts and enhances the discriminability of similar relation categories by incorporating specific visual cues. Moreover, we introduced a chain-of-thought method that breaks down the problem into smaller, more manageable pieces, allowing the LLM to generate a series of rationales for each visual cue and ultimately leading to reasonable weights. Our experiments on four benchmark datasets demonstrated the effectiveness and interpretability of our method. + +Acknowledgement. This work was supported by the National Key Research & Development Project of China (2021ZD0110700), the National Natural Science Foundation of China (U19B2043, 61976185), and the Fundamental Research Funds for the Central Universities (226-2023-00048). Long Chen is supported by HKUST Special Support for Young Faculty (F0927), and HKUST Sports Science and Technology Research Grant (SSTRG24EG04). + +# References + +[1] Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. In ICML, pages 8748-8763, 2021. +[2] Mitchell Wortsman, Gabriel Ilharco, Jong Wook Kim, Mike Li, Simon Kornblith, Rebecca Roelofs, Raphael Gontijo Lopes, Hannaneh Hajishirzi, Ali Farhadi, Hongseok Namkoong, et al. 
Robust fine-tuning of zero-shot models. In CVPR, pages 7959-7971, 2022. +[3] Jaemin Cho, Seunghyun Yoon, Ajinkya Kale, Franck Dernoncourt, Trung Bui, and Mohit Bansal. Fine-grained image captioning with clip reward. In Findings of NAACL, 2022. +[4] Chao Jia, Yinfei Yang, Ye Xia, Yi-Ting Chen, Zarana Parekh, Hieu Pham, Quoc Le, Yun-Hsuan Sung, Zhen Li, and Tom Duerig. Scaling up visual and vision-language representation learning with noisy text supervision. In ICML, pages 4904-4916, 2021. +[5] Danfei Xu, Yuke Zhu, Christopher B Choy, and Li Fei-Fei. Scene graph generation by iterative message passing. In CVPR, pages 5410–5419, 2017. +[6] Kaihua Tang, Yulei Niu, Jianqiang Huang, Jiaxin Shi, and Hanwang Zhang. Unbiased scene graph generation from biased training. In CVPR, pages 3716-3725, 2020. +[7] Xingchen Li, Long Chen, Jian Shao, Shaoning Xiao, Songyang Zhang, and Jun Xiao. Rethinking the evaluation of unbiased scene graph generation. In BMVC, 2022. +[8] Tao He, Lianli Gao, Jingkuan Song, and Yuan-Fang Li. Towards open-vocabulary scene graph generation with prompt-based finetuning. In ECCV, 2022. +[9] Lin Li, Guikun Chen, Jun Xiao, Yi Yang, Chunping Wang, and Long Chen. Compositional feature augmentation for unbiased scene graph generation. In ICCV, pages 21685-21695, 2023. +[10] Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. In NIPS, 2020. +[11] Sachit Menon and Carl Vondrick. Visual classification via description from large language models. In ICLR, 2022. +[12] Takeshi Kojima, Shixiang Shane Gu, Machel Reid, Yutaka Matsuo, and Yusuke Iwasawa. Large language models are zero-shot reasoners. In NIPS, 2022. +[13] Ranjay Krishna, Yuke Zhu, Oliver Groth, Justin Johnson, Kenji Hata, Joshua Kravitz, Stephanie Chen, Yannis Kalantidis, Li-Jia Li, David A Shamma, et al. 
Visual genome: Connecting language and vision using crowdsourced dense image annotations. IJCV, 2017. + +[14] Drew A Hudson and Christopher D Manning. Gqa: A new dataset for real-world visual reasoning and compositional question answering. In CVPR, pages 6700-6709, 2019. +[15] Yu-Wei Chao, Zhan Wang, Yugeng He, Jiaxuan Wang, and Jia Deng. Hico: A benchmark for recognizing human-object interactions in images. In ICCV, pages 1017-1025, 2015. +[16] Saurabh Gupta and Jitendra Malik. Visual semantic role labeling. arXiv preprint arXiv:1505.04474, 2015. +[17] Caixia Yan, Xiaojun Chang, Minnan Luo, Huan Liu, Xiaoqin Zhang, and Qinghua Zheng. Semantics-guided contrastive network for zero-shot object detection. TPAMI, 2022. +[18] Zhuosheng Zhang, Aston Zhang, Mu Li, and Alex Smola. Automatic chain of thought prompting in large language models. arXiv preprint arXiv:2210.03493, 2022. +[19] Rowan Zellers, Mark Yatskar, Sam Thomson, and Yejin Choi. Neural motifs: Scene graph parsing with global context. In CVPR, 2018. +[20] Anh Duc Bui, Soyeon Caren Han, and Josiah Poon. Sg-shuffle: Multi-aspect shuffle transformer for scene graph generation. In Australasian Joint Conference on Artificial Intelligence, pages 87-101. Springer, 2022. +[21] Xingning Dong, Tian Gan, Xuemeng Song, Jianlong Wu, Yuan Cheng, and Liqiang Nie. Stacked hybrid-attention and group collaborative learning for unbiased scene graph generation. In CVPR, pages 19427-19436, 2022. +[22] Lin Li, Long Chen, Yifeng Huang, Zhimeng Zhang, Songyang Zhang, and Jun Xiao. The devil is in the labels: Noisy label correction for robust scene graph generation. In CVPR, pages 18869-18878, 2022. +[23] Yu-Wei Chao, Yunfan Liu, Xieyang Liu, Huayi Zeng, and Jia Deng. Learning to detect human-object interactions. In WACV, pages 381-389, 2018. +[24] Yao Teng and Limin Wang. Structured sparse r-cnn for direct scene graph generation. In CVPR, pages 19437-19446, 2022. +[25] Keizo Kato, Yin Li, and Abhinav Gupta. Compositional learning for human object interaction. In ECCV, pages 234-251, 2018.
+[26] Dong-Jin Kim, Xiao Sun, Jinsoo Choi, Stephen Lin, and In So Kweon. Detecting human-object interactions with action co-occurrence priors. In ECCV, pages 718-736, 2020. +[27] Yue Liao, Aixi Zhang, Miao Lu, Yongliang Wang, Xiaobo Li, and Si Liu. Gen-vlkt: Simplify association and enhance interaction understanding for hoi detection. In CVPR, pages 20123-20132, 2022. +[28] Guikun Chen, Lin Li, Yawei Luo, and Jun Xiao. Addressing predicate overlap in scene graph generation with semantic granularity controller. In ICME, pages 78-83. IEEE, 2023. +[29] Kaifeng Gao, Long Chen, Hanwang Zhang, Jun Xiao, and Qianru Sun. Compositional prompt tuning with motion cues for open-vocabulary video relation detection. In ICLR, 2023. +[30] Pengfei Liu, Weizhe Yuan, Jinlan Fu, Zhengbao Jiang, Hiroaki Hayashi, and Graham Neubig. Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. ACM Computing Surveys, 55(9):1-35, 2023. +[31] Mingfei Gao, Chen Xing, Juan Carlos Niebles, Junnan Li, Ran Xu, Wenhao Liu, and Caiming Xiong. Open vocabulary object detection with pseudo bounding-box labels. In ECCV, pages 266-282, 2022. +[32] Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781, 2013. +[33] Jeffrey Pennington, Richard Socher, and Christopher D Manning. Glove: Global vectors for word representation. In EMNLP, pages 1532-1543, 2014. + +[34] Xiaolong Wang, Yufei Ye, and Abhinav Gupta. Zero-shot recognition via semantic embeddings and knowledge graphs. In CVPR, pages 6857-6866, 2018. +[35] Ching-Yao Chuang, Joshua Robinson, Yen-Chen Lin, Antonio Torralba, and Stefanie Jegelka. Debiased contrastive learning. In NIPS, 2020. +[36] Alireza Zareian, Kevin Dela Rosa, Derek Hao Hu, and Shih-Fu Chang. Open-vocabulary object detection using captions. In CVPR, pages 14393-14402, 2021. +[37] Xiuye Gu, Tsung-Yi Lin, Weicheng Kuo, and Yin Cui. 
Open-vocabulary object detection via vision and language knowledge distillation. In ICLR, 2022. +[38] Tao Wang and Nan Li. Learning to detect and segment for open vocabulary object detection. In CVPR, 2023. \ No newline at end of file diff --git a/zeroshotvisualrelationdetectionviacompositevisualcuesfromlargelanguagemodels/images.zip b/zeroshotvisualrelationdetectionviacompositevisualcuesfromlargelanguagemodels/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..3d5595cb421ea1cc79abf6cdc49c225b89aa96c2 --- /dev/null +++ b/zeroshotvisualrelationdetectionviacompositevisualcuesfromlargelanguagemodels/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:adc97e87dbe7e334d283088622a39ed8bcbef7e4c48c1217731129650711cb05 +size 787357 diff --git a/zeroshotvisualrelationdetectionviacompositevisualcuesfromlargelanguagemodels/layout.json b/zeroshotvisualrelationdetectionviacompositevisualcuesfromlargelanguagemodels/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..9fd51c99500e667ec3d8ca14ef8f67da61efc452 --- /dev/null +++ b/zeroshotvisualrelationdetectionviacompositevisualcuesfromlargelanguagemodels/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:320ee5458873c875a55229d303fba44918916b514033d002af9cb5f12a5f6b67 +size 320865 diff --git a/zerosumpolymatrixmarkovgamesequilibriumcollapseandefficientcomputationofnashequilibria/94ce3858-d766-4b1a-b79a-86816d426129_content_list.json b/zerosumpolymatrixmarkovgamesequilibriumcollapseandefficientcomputationofnashequilibria/94ce3858-d766-4b1a-b79a-86816d426129_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..da564b6d3bbc55bddff6cea3ad7035f61ea9f76b --- /dev/null +++ b/zerosumpolymatrixmarkovgamesequilibriumcollapseandefficientcomputationofnashequilibria/94ce3858-d766-4b1a-b79a-86816d426129_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid 
sha256:f264e8df39221c75b70b70d9d42877e33afc031e67670f3ad0f476e48e0e150c +size 166540 diff --git a/zerosumpolymatrixmarkovgamesequilibriumcollapseandefficientcomputationofnashequilibria/94ce3858-d766-4b1a-b79a-86816d426129_model.json b/zerosumpolymatrixmarkovgamesequilibriumcollapseandefficientcomputationofnashequilibria/94ce3858-d766-4b1a-b79a-86816d426129_model.json new file mode 100644 index 0000000000000000000000000000000000000000..6e010b7deac1c58b53f92dd7e4930bec5da4aa7d --- /dev/null +++ b/zerosumpolymatrixmarkovgamesequilibriumcollapseandefficientcomputationofnashequilibria/94ce3858-d766-4b1a-b79a-86816d426129_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:47cab70b54879bff93f3c76a0209cae7c8e745fdfb99b4a0f9966c4341c58dbe +size 199455 diff --git a/zerosumpolymatrixmarkovgamesequilibriumcollapseandefficientcomputationofnashequilibria/94ce3858-d766-4b1a-b79a-86816d426129_origin.pdf b/zerosumpolymatrixmarkovgamesequilibriumcollapseandefficientcomputationofnashequilibria/94ce3858-d766-4b1a-b79a-86816d426129_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..472eef027f625fd6e8e4bd3a0eaa54dd096a51ac --- /dev/null +++ b/zerosumpolymatrixmarkovgamesequilibriumcollapseandefficientcomputationofnashequilibria/94ce3858-d766-4b1a-b79a-86816d426129_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b887f30ddd664ca0d025eb7228a1d5f29499e73add392f0c79155aadeaa6f552 +size 442869 diff --git a/zerosumpolymatrixmarkovgamesequilibriumcollapseandefficientcomputationofnashequilibria/full.md b/zerosumpolymatrixmarkovgamesequilibriumcollapseandefficientcomputationofnashequilibria/full.md new file mode 100644 index 0000000000000000000000000000000000000000..d66c741b5559badb5092e42d431452bfbca29914 --- /dev/null +++ b/zerosumpolymatrixmarkovgamesequilibriumcollapseandefficientcomputationofnashequilibria/full.md @@ -0,0 +1,947 @@ +# Zero-sum Polymatrix Markov Games: Equilibrium Collapse and 
Efficient Computation of Nash Equilibria + +Fivos Kalogiannis + +Department of Computer Science + +University of California, Irvine + +Irvine, CA + +fkalogia@uci.edu + +Ioannis Panageas + +Department of Computer Science + +University of California, Irvine + +Irvine, CA + +ipanagea@ics.uci.edu + +# Abstract + +The works of (Daskalakis et al., 2009, 2022; Jin et al., 2022; Deng et al., 2023) indicate that computing Nash equilibria in multi-player Markov games is a computationally hard task. This fact raises the question of whether or not computational intractability can be circumvented if one focuses on specific classes of Markov games. One such example is two-player zero-sum Markov games, in which efficient ways to compute a Nash equilibrium are known. Inspired by zero-sum polymatrix normal-form games (Cai et al., 2016), we define a class of zero-sum multi-agent Markov games in which there are only pairwise interactions described by a graph that changes per state. For this class of Markov games, we show that an $\epsilon$ -approximate Nash equilibrium can be found efficiently. To do so, we generalize the techniques of (Cai et al., 2016), by showing that the set of coarse-correlated equilibria collapses to the set of Nash equilibria. Afterwards, it is possible to use any algorithm in the literature that computes Markovian policies that are approximate coarse-correlated equilibria to get an approximate Nash equilibrium. + +# 1 Introduction + +Multi-agent reinforcement learning (MARL) is the discipline that is concerned with strategic interactions between agents who find themselves in a dynamically changing environment. Early aspects of MARL can be traced back to the initial text on two-player zero-sum stochastic/Markov games (Shapley, 1953). Today, Markov games have been established as the theoretical framework for MARL (Littman, 1994).
The connection between game theory and MARL has led to several recent cornerstone results in benchmark domains in AI (Bowling et al., 2015; Brown and Sandholm, 2019, 2018; Brown et al., 2020; Silver et al., 2017; Moravčík et al., 2017; Perolat et al., 2022; Vinyals et al., 2019). The majority of the aforementioned breakthroughs relied on computing Nash equilibria (Nash, 1951) in a scalable and often decentralized manner. Although the theory of single-agent reinforcement learning (RL) has witnessed outstanding progress (e.g., see (Agarwal et al., 2020; Bertsekas, 2000; Jin et al., 2018; Li et al., 2021; Luo et al., 2019; Panait and Luke, 2005; Sidford et al., 2018; Sutton and Barto, 2018), and references therein), the landscape of multi-agent settings eludes a thorough understanding. In fact, guarantees for provably efficient computation of Nash equilibria remain limited to either environments in which agents strive to coordinate towards a shared goal (Chen et al., 2022; Claus and Boutilier, 1998; Ding et al., 2022; Fox et al., 2022; Leonardos et al., 2021; Maheshwari et al., 2022; Wang and Sandholm, 2002; Zhang et al., 2021) or fully competitive environments such as two-player zero-sum games (Cen et al., 2021; Condon, 1993; Daskalakis et al., 2020; Sayin et al., 2021, 2020; Wei et al., 2021), to name a few. Part of the reason for the lack of efficient algorithmic results in MARL is the fact that computing approximate Nash equilibria in (general-sum) games is computationally intractable (Daskalakis et al., 2009; Rubinstein, 2017; Chen et al., 2009; Etessami and Yannakakis, 2010) even when the games have a single state, i.e., normal-form two-player games. + +We aim at providing a theoretical framework that captures an array of real-world applications with multiple agents — which admittedly correspond to a large portion of all modern applications.
A recent contribution that computes NE efficiently in a setting that combines both collaboration and competition (Kalogiannis et al., 2022) concerns adversarial team Markov games, i.e., competition between an adversary and a group of uncoordinated agents with common rewards. Efficient algorithms for computing Nash equilibria in settings that include both cooperation and competition are far fewer and tend to impose assumptions that are restrictive and difficult to meet in most applications (Bowling, 2000; Hu and Wellman, 2003). The focus of our work is centered around the following question: + +Are there any other settings of Markov games that encompass both competition and coordination while maintaining the tractability of Nash equilibrium computation? $(\star)$ + +Inspired by contemporary works in algorithmic game theory and specifically zero-sum polymatrix normal-form games (Cai et al., 2016), we focus on the problem of computing Nash equilibria in zero-sum polymatrix Markov games. Informally, a polymatrix Markov game is a multi-agent Markov decision process with $n$ agents, state space $\mathcal{S}$ , action space $\mathcal{A}_k$ for agent $k$ , and a transition probability model $\mathbb{P}$ , and it is characterized by a graph $\mathcal{G}_s(\mathcal{V},\mathcal{E}_s)$ which is potentially different in every state $s$ . For a fixed state $s$ , the nodes of the graph $\mathcal{V}$ correspond to the agents, and the edges $\mathcal{E}_s$ of the graph are two-player normal-form games (different per state). Every node/agent $k$ has a fixed set of actions $\mathcal{A}_k$ and chooses a strategy from this set to play in all games corresponding to adjacent edges. Given an action profile of all the players, the node's reward is the sum of its rewards in all games on the edges adjacent to it. The game is globally zero-sum if, for all strategy profiles, the rewards of all players add up to zero. Afterwards, the process transitions to a state $s'$ according to $\mathbb{P}$ .
At a higher level, the agents interact over a network whose connections change at every state. + +Our results. We consider a zero-sum polymatrix Markov game with the additional property that a single agent (not necessarily the same) controls the transition at each state, i.e., the transition model is affected by a single agent's actions for each state $s$ . These games are known as switching controller Markov games. We show that we can compute in time poly $(|S|, n, \max_{i \in [n]} |\mathcal{A}_i|, 1 / \epsilon)$ an $\epsilon$ -approximate Nash equilibrium. The proof relies on the fact that zero-sum polymatrix Markov games with a switching controller have the following important property: the marginals of a coarse-correlated equilibrium constitute a Nash equilibrium (see Section 3.2). We refer to this phenomenon as equilibrium collapse. This property was already known for zero-sum polymatrix normal-form games by Cai et al. (2016), and our results generalize the aforementioned work to Markov games. As a corollary, we get that any algorithm in the literature that guarantees convergence to Markovian policies that are approximate coarse-correlated equilibria—e.g., (Daskalakis et al., 2022)—can be used to get approximate Nash equilibria. Our contribution also unifies previous results that were otherwise only applicable to the settings of single and switching-control two-player zero-sum games, or zero-sum polymatrix normal-form games. Finally, we show that the equilibrium collapsing phenomenon does not carry over if there are two or more controllers per state (see Section 3.3). + +Technical overview. In order to prove our results, we rely on nonlinear programming and, in particular, nonlinear programs whose optima coincide with the Nash equilibria for a particular Markov game (Filar et al., 1991; Filar and Vrieze, 2012).
Our approach is analogous to the one used by Cai et al. (2016), which uses linear programming to prove the collapse of the set of CCE to the set of NE. Nevertheless, using the duality of linear programming in our case is not possible since a Markov game introduces nonlinear terms in the program. It is noteworthy that we do not need to invoke (Lagrangian) duality or an argument that relies on stationary points of a Lagrangian function. Rather, we use the structure of zero-sum polymatrix Markov games with a switching controller to conclude the relation between a correlated policy and the individual policies formed by its marginals in terms of the individual utilities of the game. + +# 1.1 Importance of zero-sum polymatrix Markov games + +Strategic interactions of agents over a network are a topic of research in multiple disciplines that span computer science (Easley and Kleinberg, 2010), economics (Schweitzer et al., 2009), control theory (Tipsuwan and Chow, 2003), and biology (Szabó and Fath, 2007), to name a few. + +In many environments where multiple agents interact with each other, they do so in a localized manner. That is, every agent is affected by the set of agents that belong to their immediate "neighborhood". Further, it is quite common that these agents will interact independently with each one of their neighbors, meaning that the outcome of their total interactions is a sum of pairwise interactions rather than interactions that depend on joint actions. Finally, players might remain indifferent to the actions of players who are not their neighbors. + +To illustrate this phenomenon we can think of multiplayer e-games (e.g., CS:GO, Fortnite, League of Legends, etc.) where each player interacts through the same move only with players that are present on their premises and, in general, the neighbors cannot combine their actions into something that is not a mere sum of their individual actions (i.e., they can rarely "multiply" the effect of the individual actions).
In other scenarios, such as strategic games played on social networks (e.g., opinion dynamics), agents clearly interact in a pairwise manner with agents that belong to their neighborhood and are somewhat oblivious to the actions of agents with whom they do not share a connection. + +With the proposed model we provide the theoretical framework needed to reason about such strategic interactions over dynamically changing networks. + +# 1.2 Related work + +From the literature of Markov games, we recognize the settings of single controller (Filar and Raghavan, 1984; Sayin et al., 2022; Guan et al., 2016; Qiu et al., 2021) and switching controller (Vrieze et al., 1983) Markov games to be among the most closely related to ours. In these settings, all agents' actions affect individual rewards, but in every state one particular player (single controller), or respectively a potentially different one (switching controller), controls the transition of the environment to a new state. To the best of our knowledge, prior to our work, the only Markov games that have been examined under this assumption are either zero-sum or potential games. + +Further, we manage to go beyond the dichotomy of absolute competition or absolute collaboration by generalizing zero-sum polymatrix games to their Markovian counterpart. In this sense, our work is related to previous works of Cai et al. (2016); Anagnostides et al. (2022); Ao et al. (2022), which show fast convergence to Nash equilibria in zero-sum polymatrix normal-form games for various no-regret learning algorithms including optimistic gradient descent. + +# 2 Preliminaries + +Notation. We define $[n] := \{1, \dots, n\}$ . Scalars are denoted using lightface variables, while we use boldface for vectors and matrices. For simplicity in the exposition, we use $O(\cdot)$ to suppress dependencies that are polynomial in the parameters of the game.
Additionally, given a collection $\mathbf{x}$ of policies or strategies for players $[n]$ , $\mathbf{x}_{-k}$ denotes the policies of every player excluding $k$ . + +# 2.1 Markov games + +In its most general form, a Markov game (MG) with a finite number $n$ of players is defined as a tuple $\Gamma(H, \mathcal{S}, \{\mathcal{A}_k\}_{k \in [n]}, \mathbb{P}, \{r_k\}_{k \in [n]}, \gamma, \rho)$ . Namely, + +- $H \in \mathbb{N}_{+}$ denotes the time horizon, or the length of each episode, +- $\mathcal{S}$ , with cardinality $S \coloneqq |\mathcal{S}|$ , stands for the state space, + +- $\{\mathcal{A}_k\}_{k\in [n]}$ is the collection of every player's action space, while $\mathcal{A} := \mathcal{A}_1\times \dots \times \mathcal{A}_n$ denotes the joint action space; further, an element of that set—a joint action—is generally noted as $\pmb{a} = (a_{1},\ldots ,a_{n})\in \mathcal{A}$ . + +- $\mathbb{P} := \{\mathbb{P}_h\}_{h \in [H]}$ is the set of all transition matrices, with $\mathbb{P}_h : \mathcal{S} \times \mathcal{A} \to \Delta(\mathcal{S})$ ; further, $\mathbb{P}_h(\cdot | s, a)$ marks the probability of transitioning to every state given that the joint action $a$ is selected at time $h$ and state $s$ — in infinite-horizon games $\mathbb{P}$ does not depend on $h$ and the index is dropped. +- $r_k \coloneqq \{r_{k,h}\}$ is the reward function of player $k$ at time $h$ ; $r_{k,h}: \mathcal{S} \times \mathcal{A} \to [-1,1]$ yields the reward of player $k$ at a given state and joint action — in infinite-horizon games, $r_{k,h}$ is the same for every $h$ and the index is dropped. +- a discount factor $\gamma > 0$ , which is generally set to 1 when $H < \infty$ , and $\gamma < 1$ when $H \to \infty$ . +- an initial state distribution $\rho \in \Delta (\mathcal{S})$ . + +Policies and value functions. We will define stationary and nonstationary Markov policies.
When the horizon $H$ is finite, a stationary policy equilibrium need not exist even for a single-agent MG, i.e., a Markov decision process; in this case, we seek nonstationary policies. For the case of infinite-horizon games, it is folklore that a stationary Markov policy Nash equilibrium always exists. + +We note that a policy is Markovian when it depends on the present state only. A nonstationary Markov policy $\pi_{k}$ for player $k$ is defined as $\pi_{k} := \{\pi_{k,h} : \mathcal{S} \to \Delta(\mathcal{A}_{k}), \forall h \in [H]\}$ . It is a sequence of mappings of states $s$ to a distribution over actions $\Delta(\mathcal{A}_{k})$ for every timestep $h$ . By $\pi_{k,h}(a|s)$ we will denote the probability of player $k$ taking action $a$ in timestep $h$ and state $s$ . A Markov policy is said to be stationary if it outputs an identical probability distribution over actions whenever a particular state is visited, regardless of the corresponding timestep $h$ . + +Further, we define a nonstationary Markov joint policy $\sigma \coloneqq \{\pi_h,\forall h\in [H]\}$ to be a sequence of mappings from states to distributions over joint actions $\Delta (\mathcal{A})\equiv \Delta (\mathcal{A}_1\times \dots \times \mathcal{A}_n)$ for all timesteps $h$ in the time horizon. In this case, the players can be said to share a common source of randomness, or that the joint policy is correlated. + +A joint policy $\pi$ will be said to be a product policy if there exist policies $\pi_k: [H] \times \mathcal{S} \to \Delta(\mathcal{A}_k)$ , $\forall k \in [n]$ such that $\pi_h = \pi_{1,h} \times \dots \times \pi_{n,h}$ , $\forall h \in [H]$ .
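As a concrete illustration (a minimal sketch with hypothetical horizon, state, and action counts; none of these names appear in the paper), a nonstationary Markov policy for one player can be stored as an $H \times S \times A$ array of per-state action distributions, and stationarity amounts to the per-state distributions agreeing at every timestep:

```python
import numpy as np

H, S, A = 3, 4, 2  # hypothetical horizon, number of states, number of actions

# A nonstationary Markov policy: one distribution over actions per (h, s).
rng = np.random.default_rng(0)
pi = rng.random((H, S, A))
pi /= pi.sum(axis=-1, keepdims=True)  # normalize each row to a distribution

def is_stationary(pi):
    # Stationary iff the per-state distributions are identical at every timestep.
    return np.allclose(pi, pi[0])

# A stationary policy: repeat the timestep-0 mapping across the whole horizon.
pi_stat = np.tile(pi[0], (H, 1, 1))

assert not is_stationary(pi)
assert is_stationary(pi_stat)
```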
Moreover, given a joint policy $\pi$ we let a joint policy $\pi_{-k}$ stand for the marginal joint policy excluding player $k$ , i.e., + +$$
\pi_{-k,h}(\boldsymbol{a} | s) = \sum_{a' \in \mathcal{A}_k} \pi_h(a', \boldsymbol{a} | s), \quad \forall h \in [H], \forall s \in \mathcal{S}, \forall \boldsymbol{a} \in \mathcal{A}_{-k}.
$$ + +By fixing a joint policy $\pi$ we can define the value function of any given state $s$ and timestep $h$ for every player $k$ as the expected cumulative reward they get from that state and timestep $h$ onward, + +$$
V_{k,h}^{\boldsymbol{\pi}}(s_1) = \mathbb{E}_{\boldsymbol{\pi}}\left[ \sum_{\tau = h}^{H} \gamma^{\tau - 1} r_{k,\tau}(s_\tau, \boldsymbol{a}_\tau) \,\middle|\, s_1 \right] = \boldsymbol{e}_{s_1}^{\top} \sum_{\tau = h}^{H} \left( \gamma^{\tau - 1} \prod_{\tau' = h}^{\tau} \mathbb{P}_{\tau'}(\boldsymbol{\pi}_{\tau'}) \right) \boldsymbol{r}_{k,\tau}(\boldsymbol{\pi}_\tau).
$$ + +Depending on whether the game is of finite or infinite horizon, we get the following displays: + +- In finite-horizon games, where $\gamma = 1$ , the value function reads + +$$
V_{k,h}^{\boldsymbol{\pi}}(s_1) = \boldsymbol{e}_{s_1}^{\top} \sum_{\tau = h}^{H} \left( \prod_{\tau' = h}^{\tau} \mathbb{P}_{\tau'}(\boldsymbol{\pi}_{\tau'}) \right) \boldsymbol{r}_{k,\tau}(\boldsymbol{\pi}_\tau),
$$ + +- In infinite-horizon games, the value function of each state is + +$$
V_{k}^{\boldsymbol{\pi}}(s_1) = \boldsymbol{e}_{s_1}^{\top} (\mathbf{I} - \gamma \mathbb{P}(\boldsymbol{\pi}))^{-1} \boldsymbol{r}(\boldsymbol{\pi}),
$$ + +where $\mathbb{P}_h(\pi_h), \mathbb{P}(\pi)$ and $\pmb{r}_h(\pmb{\pi}_h), \pmb{r}(\pmb{\pi})$ denote the state-to-state transition probability matrix and the expected per-state reward vector induced by the policy $\pi_h$ or $\pmb{\pi}$ accordingly.
Additionally, $e_{s_1}$ is an all-zero vector except for a 1 in its $s_1$ -th position. Also, we denote $V_{k,h}^{\pi}(\rho) = \sum_{s\in \mathcal{S}}\rho (s)V_{k,h}^{\pi}(s)$ . + +Best-response policies. Given an arbitrary joint policy $\sigma$ , we define the best-response policy of a player $k$ to be a policy $\pi_k^\dagger \coloneqq \{\pi_{k,h}^\dagger, \forall h \in [H]\}$ that is a maximizer of $\max_{\pi_k'} V_{k,1}^{\pi_k' \times \sigma_{-k}}(s_1)$ . Additionally, we will use the following notation $V_{k,h}^{\dagger,\sigma_{-k}}(s) \coloneqq \max_{\pi_k'} V_{k,h}^{\pi_k' \times \sigma_{-k}}(s)$ . + +Equilibrium notions. Having defined what a best-response is, it is then quite direct to define different notions of equilibria for Markov games. + +Definition 2.1 (CCE). We will say that a joint (potentially correlated) policy $\sigma \in \Delta (\mathcal{A})^{H\times S}$ is an $\epsilon$ -approximate coarse-correlated equilibrium if it holds that, for an $\epsilon >0$ + +$$
V_{k,1}^{\dagger, \sigma_{-k}}(s_1) - V_{k,1}^{\sigma}(s_1) \leq \epsilon, \quad \forall k \in [n]. \tag {CCE}
$$ + +Further, we define a Nash equilibrium policy: + +Definition 2.2 (NE). A joint, product policy $\pi \in \prod_{k\in [n]}\Delta (\mathcal{A}_k)^{H\times S}$ is an $\epsilon$ -approximate Nash equilibrium if it holds that, for an $\epsilon >0$ + +$$
V_{k,1}^{\dagger, \pi_{-k}}(s_1) - V_{k,1}^{\pi}(s_1) \leq \epsilon, \quad \forall k \in [n]. \tag {NE}
$$ + +It is quite evident that an approximate Nash equilibrium is also an approximate coarse-correlated equilibrium, while the converse is not generally true. For infinite-horizon games the definitions are analogous and are deferred to the appendix. + +# 2.2 Our setting + +We focus on the setting of zero-sum polymatrix switching-control Markov games.
This setting encompasses two major assumptions related to the reward functions in every state $\{r_k\}_{k\in [n]}$ and the transition kernel $\mathbb{P}$ . The first assumption imposes a zero-sum, polymatrix structure on $\{r_k\}_{k\in [n]}$ for every state and directly generalizes zero-sum polymatrix games to games with multiple states. + +Assumption 1 (Zero-sum polymatrix games). The reward functions of every player in any state $s$ are characterized by a zero-sum, polymatrix structure. + +Polymatrix structure. For every state $s$ there exists an undirected graph $\mathcal{G}_s(\mathcal{V},\mathcal{E}_s)$ where, + +- the set of nodes $\mathcal{V}$ coincides with the set of agents $[n]$ ; the $k$ -th node is the $k$ -th agent, +- the set of edges $\mathcal{E}_s$ stands for the set of pair-wise interactions; each edge $e = (k,j), k,j \in [n], k \neq j$ stands for a general-sum normal-form game played between players $k, j$ and which we denote by $\left(r_{kj}(s,\cdot,\cdot), r_{jk}(s,\cdot,\cdot)\right)$ with $r_{kj}, r_{jk}: \mathcal{S} \times \mathcal{A}_k \times \mathcal{A}_j \to [-1,1]$ . + +Moreover, we define $\mathrm{adj}(s,k)\coloneqq \{j\in [n]\mid (k,j)\in \mathcal{E}_s\} \subseteq [n]$ to be the set of all neighbors of an arbitrary agent $k$ in state $s$ . The reward of agent $k$ at state $s$ given a joint action $\pmb{a}$ depends solely on interactions with its neighbors, + +$$
r_{k,h}(s, \boldsymbol{a}) = \sum_{j \in \operatorname{adj}(s,k)} r_{kj,h}(s, a_k, a_j), \quad \forall h \in [H], \forall s \in \mathcal{S}, \forall \boldsymbol{a} \in \mathcal{A}.
$$ + +Further, the zero-sum assumption implies that, + +$$
\sum_{k} r_{k,h}(s, \boldsymbol{a}) = 0, \quad \forall h \in [H], \forall s \in \mathcal{S}, \forall \boldsymbol{a} \in \mathcal{A}. \tag {1}
$$
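As a toy numerical sketch of Assumption 1 (three hypothetical players at a single state, with made-up edge payoffs; for simplicity each edge game here is itself pairwise zero-sum, which is one easy way to satisfy the global zero-sum condition (1), though (1) is more general):

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(1)
n_actions = 2
edges = [(0, 1), (1, 2)]  # hypothetical interaction graph for one state

# Edge payoffs r[(k, j)][a_k, a_j]; each edge game is made pairwise zero-sum:
# r_jk(a_j, a_k) = -r_kj(a_k, a_j), which is sufficient for condition (1).
r = {}
for (k, j) in edges:
    M = rng.integers(-3, 4, size=(n_actions, n_actions)).astype(float)
    r[(k, j)] = M
    r[(j, k)] = -M.T

adj = {0: [1], 1: [0, 2], 2: [1]}  # neighborhoods induced by `edges`

def reward(k, a):
    # Player k's reward: the sum of its edge-game payoffs with its neighbors.
    return sum(r[(k, j)][a[k], a[j]] for j in adj[k])

# Global zero-sum: the players' rewards sum to zero for every joint action.
for a in product(range(n_actions), repeat=3):
    assert abs(sum(reward(k, a) for k in range(3))) < 1e-9
```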
+ +A further assumption (switching-control) is necessary in order to ensure the desirable property of equilibrium collapse. + +Assumption 2 (Switching-control). In every state $s \in S$ , there exists a single player (not necessarily the same), or controller, whose actions determine the probability of transitioning to a new state. + +The function $\operatorname{argctrl} : S \to [n]$ returns the index of the player who controls the transition probability at a given state $s$ . On the other hand, the function $\operatorname{ctrl} : S \times \mathcal{A} \to \mathcal{A}_{\mathrm{argctrl}(s)}$ gets an input of a joint action $\mathbf{a}$ , for a particular state $s$ , and returns the action of the controller of that state, $a_{\mathrm{argctrl}(s)}$ . + +Remark 1. It is direct to see that Markov games with a single controller and turn-based Markov games (Daskalakis et al., 2022), are special case of Markov games with switching controller. + +# 3 Main results + +In this section we provide the main results of this paper. We shall show the collapsing phenomenon of coarse-correlated equilibria to Nash equilibria in the case of zero-sum, single switching controller polymatrix Markov games. Before we proceed, we provide a formal definition of the notion of collapsing. + +Definition 3.1 (CCE collapse to NE). Let $\sigma$ be any $\epsilon$ -CCE policy of a Markov game. Moreover, let the marginal policy $\pi^{\sigma} := (\pi_1^{\sigma}, \dots, \pi_n^{\sigma})$ be defined as: + +$$ +\pi_ {k} ^ {\sigma} (a | s) = \sum_ {\boldsymbol {a} _ {- k} \in \mathcal {A} _ {- k}} \sigma (a, \boldsymbol {a} _ {- k} | s), \forall k, \forall s \in \mathcal {S}, \forall a \in \mathcal {A} _ {k}. +$$ + +If $\pi^{\sigma}$ is a $O(\epsilon)$ -NE equilibrium for every $\sigma$ then we say the set of approximate CCE's collapses to that of approximate NE's. + +We start with the warm-up result that the set of CCE's collapses to the set of NE's for two-player zero-sum Markov games. 
# 3.1 Warm-up: equilibrium collapse in two-player zero-sum MG's

Since we focus on two-player zero-sum Markov games, we simplify the notation by writing $V_{h=1}^{\cdot}(s) \coloneqq V_{2,1}^{\cdot}(s)$ — i.e., player 1 is the minimizing player and player 2 is the maximizer. We show the following theorem:

Theorem 3.1 (Collapse in two-player zero-sum MG's). Consider a two-player zero-sum Markov game $\Gamma'$ and let $\sigma$ be an $\epsilon$ -approximate CCE policy of that game. Then, the marginalized product policies $\pi_1^\sigma, \pi_2^\sigma$ form a $2\epsilon$ -approximate NE.

Proof. Since $\sigma$ is an $\epsilon$ -approximate CCE joint policy, by definition it holds that for any $\pi_1$ and any $\pi_2$ ,

$$
V_{h=1}^{\boldsymbol{\sigma}_{-2} \times \boldsymbol{\pi}_2}(s_1) - \epsilon \leq V_{h=1}^{\boldsymbol{\sigma}}(s_1) \leq V_{h=1}^{\boldsymbol{\pi}_1 \times \boldsymbol{\sigma}_{-1}}(s_1) + \epsilon .
$$

Due to Claim A.1, the latter is equivalent to the following inequality,

$$
V_{h=1}^{\boldsymbol{\pi}_1^{\sigma} \times \boldsymbol{\pi}_2}(s_1) - \epsilon \leq V_{h=1}^{\sigma}(s_1) \leq V_{h=1}^{\boldsymbol{\pi}_1 \times \boldsymbol{\pi}_2^{\sigma}}(s_1) + \epsilon .
$$

Plugging in $\pi_1^\sigma, \pi_2^\sigma$ alternately, we get the inequalities:

$$
\left\{ \begin{array}{l} V_{h=1}^{\pi_1^{\sigma} \times \pi_2}(s_1) - \epsilon \leq V_{h=1}^{\sigma}(s_1) \leq V_{h=1}^{\pi_1^{\sigma} \times \pi_2^{\sigma}}(s_1) + \epsilon \\ V_{h=1}^{\pi_1^{\sigma} \times \pi_2^{\sigma}}(s_1) - \epsilon \leq V_{h=1}^{\sigma}(s_1) \leq V_{h=1}^{\pi_1 \times \pi_2^{\sigma}}(s_1) + \epsilon \end{array} \right.
$$

The latter leads us to conclude that for any $\pi_1$ and any $\pi_2$ ,

$$
V_{h=1}^{\pi_1^{\sigma} \times \pi_2}(s_1) - 2\epsilon \leq V_{h=1}^{\pi_1^{\sigma} \times \pi_2^{\sigma}}(s_1) \leq V_{h=1}^{\pi_1 \times \pi_2^{\sigma}}(s_1) + 2\epsilon ,
$$

which is precisely the definition of a $2\epsilon$ -approximate NE in a zero-sum game. $\square$

# 3.2 Equilibrium collapse in finite-horizon polymatrix Markov games

In this section, we turn to the more challenging case of polymatrix Markov games, which is the main focus of this paper. For any finite-horizon Markov game, we define $(\mathrm{P_{NE}})$ to be the following nonlinear program with variables $\pi, w$ :

$$
\begin{aligned}
\min \quad & \sum_{k \in [n]} \left( w_{k,1}(s_1) - \boldsymbol{e}_{s_1}^{\top} \sum_{h=1}^{H} \left( \prod_{\tau=1}^{h} \mathbb{P}_{\tau}(\boldsymbol{\pi}_{\tau}) \right) \boldsymbol{r}_{k,h}(\boldsymbol{\pi}_h) \right) \\
\text{s.t.} \quad & w_{k,h}(s) \geq r_{k,h}(s, a, \boldsymbol{\pi}_{-k,h}) + \mathbb{P}_h(s, a, \boldsymbol{\pi}_{-k,h}) \boldsymbol{w}_{k,h+1}, \\
& \qquad \forall s \in \mathcal{S}, \forall h \in [H], \forall k \in [n], \forall a \in \mathcal{A}_k; \\
& w_{k,H}(s) = 0, \quad \forall k \in [n], \forall s \in \mathcal{S}; \\
& \pi_{k,h}(s) \in \Delta(\mathcal{A}_k), \quad \forall s \in \mathcal{S}, \forall h \in [H], \forall k \in [n].
\end{aligned}
$$

Using the following theorem, we are able to use $(\mathrm{P_{NE}})$ to argue about equilibrium collapse.

Theorem 3.2 (NE and global optima of $(\mathrm{P_{NE}})$).
If $(\pi^{\star},\pmb{w}^{\star})$ yields an $\epsilon$ -approximate global minimum of $(\mathrm{P_{NE}})$ , then $\pi^{\star}$ is an $n\epsilon$ -approximate NE of the zero-sum polymatrix switching-controller MG $\Gamma$ . Conversely, if $\pi^{\star}$ is an $\epsilon$ -approximate NE of the MG $\Gamma$ with corresponding value-function vector $\pmb{w}^{\star}$ such that $w_{k,h}^{\star}(s) = V_{k,h}^{\pi^{\star}}(s)$ for all $(k,h,s)\in [n]\times [H]\times \mathcal{S}$ , then $(\pi^{\star},\pmb{w}^{\star})$ attains an $\epsilon$ -approximate global minimum of $(\mathrm{P_{NE}})$ .

In the following, we use $(\mathrm{P_{NE}})$ to prove the collapse of CCE's to NE's. We observe that this program is nonlinear and, in general, nonconvex. Hence, duality cannot be used the way it was used in (Cai et al., 2016) to prove equilibrium collapse. Nevertheless, we can prove that given a CCE policy $\sigma$ , the marginalized product policy $\times_{k\in [n]}\pi_k^\sigma$ along with an appropriate vector $w^{\sigma}$ achieves a global minimum of the nonlinear program $(\mathrm{P_{NE}})$ . More precisely, our main result reads as follows.

Theorem 3.3 (CCE collapse to NE in polymatrix MG). Consider a zero-sum polymatrix switching-control Markov game, i.e., a Markov game for which Assumptions 1 and 2 hold, and let $\sigma$ be an $\epsilon$ -approximate CCE of that game. Then, the marginal product policy $\pi^{\sigma}$ , with $\pi_{k,h}^{\sigma}(a|s) = \sum_{a_{-k} \in \mathcal{A}_{-k}} \sigma_h(a, a_{-k}|s)$ , $\forall k \in [n], \forall h \in [H]$ , is an $n\epsilon$ -approximate NE.

Proof. Let $\sigma$ be an $\epsilon$ -approximate CCE policy of the game $\Gamma$ , and for each agent $k$ let $\boldsymbol{w}_k^\dagger$ be the value vector of $k$ 's best response to the joint policy $\sigma_{-k}$ .
Now, we observe that due to Assumption 1,

$$
\begin{aligned}
w_{k,h}^{\dagger}(s) &\geq r_{k,h}(s, a, \boldsymbol{\sigma}_{-k,h}) + \mathbb{P}_h(s, a, \boldsymbol{\sigma}_{-k,h}) \boldsymbol{w}_{k,h+1}^{\dagger} \\
&= \sum_{j \in \operatorname{adj}(s,k)} r_{kj,h}(s, a, \boldsymbol{\pi}_{j,h}^{\sigma}) + \mathbb{P}_h(s, a, \boldsymbol{\sigma}_{-k,h}) \boldsymbol{w}_{k,h+1}^{\dagger}.
\end{aligned}
$$

Further, due to Assumption 2,

$$
\mathbb{P}_h(s, a, \boldsymbol{\sigma}_{-k,h}) \boldsymbol{w}_{k,h+1}^{\dagger} = \mathbb{P}_h(s, a, \boldsymbol{\pi}_{\operatorname{argctrl}(s),h}^{\boldsymbol{\sigma}}) \boldsymbol{w}_{k,h+1}^{\dagger},
$$

or, equivalently,

$$
\mathbb{P}_h(s, a, \boldsymbol{\sigma}_{-k,h}) \boldsymbol{w}_{k,h+1}^{\dagger} = \mathbb{P}_h(s, a, \boldsymbol{\pi}^{\boldsymbol{\sigma}}) \boldsymbol{w}_{k,h+1}^{\dagger}.
$$

Putting these pieces together, we conclude that $(\pi^{\sigma}, \boldsymbol{w}^{\dagger})$ is feasible for the nonlinear program $(\mathrm{P}_{\mathrm{NE}})$ .

What is left is to prove that it is also an $\epsilon$ -approximate global minimum. Indeed, if $\sum_{k} w_{k,1}^{\dagger}(s_1) \leq \epsilon$ (which holds by the assumption of an $\epsilon$ -approximate CCE), then the objective function of $(\mathrm{P}_{\mathrm{NE}})$ attains an $\epsilon$ -approximate global minimum at $(\pi^{\sigma}, \boldsymbol{w}^{\dagger})$ . In turn, due to Theorem 3.2, the latter implies that $\pi^{\sigma}$ is an $n\epsilon$ -approximate NE.

Due to the algorithm introduced in (Daskalakis et al., 2022) for CCE computation in general-sum MG's, we can now conclude the following statement.

Corollary 3.1 (Computing a NE—finite-horizon).
Given a finite-horizon switching-control zero-sum polymatrix Markov game, we can compute an $\epsilon$ -approximate Nash equilibrium policy that is Markovian, with probability at least $1 - \delta$ , in time poly $(n, H, S, \max_k |\mathcal{A}_k|, \frac{1}{\epsilon}, \log(1 / \delta))$ .

In the next section, we discuss the necessity of the switching-control assumption using a counterexample in which equilibria do not collapse.

# 3.3 No equilibrium collapse with more than one controller per state

Although Assumption 1 is sufficient for the collapse of any CCE to a NE in single-state (i.e., normal-form) games, we will prove that Assumption 2 is indispensable in guaranteeing such a collapse in zero-sum polymatrix Markov games. That is, if more than one player affects the transition probability from one state to another, a CCE is not guaranteed to collapse to a NE.

Example 1. We consider the following 3-player Markov game played over a time horizon $H = 3$ . There exist three states, $s_1, s_2$ , and $s_3$ , and the game starts at state $s_1$ . Player 3 has a single action in every state, while players 1 and 2 have two available actions, $\{a_1, a_2\}$ and $\{b_1, b_2\}$ respectively, in every state.

Reward functions. If player 1 (respectively, player 2) takes action $a_1$ (resp., $b_1$ ) in either of the states $s_1$ or $s_2$ , they get a reward equal to $\frac{1}{20}$ . In state $s_3$ , both players get a reward equal to $-\frac{1}{2}$ regardless of the action they select. Player 3 always gets a reward equal to the negative of the sum of the rewards of the other two players. This way, the zero-sum polymatrix property of the game is ensured (Assumption 1).

Transition probabilities. If players 1 and 2 select the joint action $(a_{1}, b_{1})$ in state $s_{1}$ , the game will transition to state $s_{2}$ . In any other case, it will transition to state $s_{3}$ .
In state $s_2$ the converse happens: under the joint action $(a_1, b_1)$ the game will transition to state $s_3$ , and under any other joint action it will transition to state $s_1$ . From state $s_3$ , the game transitions to state $s_1$ or $s_2$ uniformly at random.

At this point, it is important to notice that two players control the transition probability from one state to another. In other words, Assumption 2 does not hold.

![](images/774b02538bb0a0af55335373754ce24eeb23d7328a94d83b7febd4005bfac5d8.jpg)
Figure 1: A graph of the state space with transition probabilities parametrized with respect to the policy of each player.

Next, we consider the joint policy $\pmb{\sigma}$ ,

$$
\boldsymbol{\sigma}(s_1) = \boldsymbol{\sigma}(s_2) = \begin{array}{c|cc}
 & b_1 & b_2 \\ \hline
a_1 & 0 & 1/2 \\
a_2 & 1/2 & 0
\end{array}.
$$

Claim 3.1. The joint policy $\sigma$ that assigns probability $\frac{1}{2}$ to each of the joint actions $(a_{1}, b_{2})$ and $(a_{2}, b_{1})$ in both states $s_{1}, s_{2}$ is a CCE, and $V_{1,1}^{\sigma}(s_{1}) = V_{2,1}^{\sigma}(s_{1}) = \frac{1}{20}$ .

Yet, the marginalized product policy of $\sigma$ , which we denote $\pi_1^\sigma \times \pi_2^\sigma$ , does not constitute a NE. The components of this policy are,

$$
\left\{ \begin{array}{l} \pi_1^{\sigma}(s_1) = \pi_1^{\sigma}(s_2) = \left( \begin{array}{cc} a_1 & a_2 \\ 1/2 & 1/2 \end{array} \right), \\ \pi_2^{\sigma}(s_1) = \pi_2^{\sigma}(s_2) = \left( \begin{array}{cc} b_1 & b_2 \\ 1/2 & 1/2 \end{array} \right). \end{array} \right.
$$

That is, the product policy $\pi_1^\sigma \times \pi_2^\sigma$ selects either of the two actions of each player in states $s_1, s_2$ independently and uniformly at random.
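Example 1 is small enough to evaluate numerically. The sketch below encodes one reading of its rewards and transitions (our encoding, not the paper's; the paper's exact numerical values may differ) and evaluates an arbitrary correlated joint policy by backward induction. Player 3 is the dummy whose reward enforces the zero-sum property, which the construction preserves at every state.

```python
# Evaluating joint policies in (an encoding of) Example 1 by backward
# induction. H = 3; states s1, s2, s3; action 0 stands for a1/b1 and
# action 1 for a2/b2. Illustrative sketch only.

H = 3
STATES = ["s1", "s2", "s3"]

def rewards(s, a1, a2):
    """Per-step rewards (r1, r2, r3); player 3 closes the zero sum."""
    if s in ("s1", "s2"):
        r1 = 1 / 20 if a1 == 0 else 0.0
        r2 = 1 / 20 if a2 == 0 else 0.0
    else:  # s3: both players penalized regardless of their actions
        r1 = r2 = -1 / 2
    return (r1, r2, -(r1 + r2))

def transition(s, a1, a2):
    """Distribution over next states, as described in Example 1."""
    if s == "s1":
        return {"s2": 1.0} if (a1, a2) == (0, 0) else {"s3": 1.0}
    if s == "s2":
        return {"s3": 1.0} if (a1, a2) == (0, 0) else {"s1": 1.0}
    return {"s1": 0.5, "s2": 0.5}

def evaluate(joint):
    """joint[s][(a1, a2)] = prob.; returns (V_1, V_2, V_3) at s1, h = 1."""
    V = {s: (0.0, 0.0, 0.0) for s in STATES}  # value beyond the horizon
    for _ in range(H):
        V = {s: tuple(
                sum(p * (rewards(s, a1, a2)[k]
                         + sum(q * V[t][k]
                               for t, q in transition(s, a1, a2).items()))
                    for (a1, a2), p in joint[s].items())
                for k in range(3))
             for s in STATES}
    return V["s1"]

# The correlated policy sigma of Claim 3.1, played in every state.
sigma = {s: {(0, 1): 0.5, (1, 0): 0.5} for s in STATES}
V = evaluate(sigma)
# By construction, the three players' values always sum to zero.
```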
The following claim then shows that, in general, when more than one player controls the transitions, the set of equilibria does not collapse.

Claim 3.2. The product policy $\pi_1^\sigma \times \pi_2^\sigma$ is not a NE.

In conclusion, Assumption 1 alone does not suffice to ensure equilibrium collapse.

Theorem 3.4. There exists a zero-sum polymatrix Markov game for which Assumption 2 is not satisfied that has a CCE which does not collapse to a NE.

# 3.4 Equilibrium collapse in infinite-horizon polymatrix Markov games

In proving equilibrium collapse for infinite-horizon polymatrix Markov games, we use similar arguments and the following nonlinear program, $(\mathrm{P}_{\mathrm{NE}}^{\prime})$ , with variables $\pi, w$ ,

$$
\begin{aligned}
\min \quad & \sum_{k \in [n]} \boldsymbol{\rho}^{\top} \left( \boldsymbol{w}_k - (\mathbf{I} - \gamma \mathbb{P}(\boldsymbol{\pi}))^{-1} \boldsymbol{r}_k(\boldsymbol{\pi}) \right) \\
\text{s.t.} \quad & w_k(s) \geq r_k(s, a, \boldsymbol{\pi}_{-k}) + \gamma \mathbb{P}(s, a, \boldsymbol{\pi}_{-k}) \boldsymbol{w}_k, \quad \forall s \in \mathcal{S}, \forall k \in [n], \forall a \in \mathcal{A}_k; \\
& \pi_k(s) \in \Delta(\mathcal{A}_k), \quad \forall s \in \mathcal{S}, \forall k \in [n].
\end{aligned}
$$

We note that Example 1 can be suitably adjusted to show that the switching-control assumption is necessary for equilibrium collapse in infinite-horizon games as well. In contrast to finite-horizon games, infinite-horizon games cannot possibly be solved using backward induction. They pose a genuine computational challenge and, in that sense, the importance of the property of equilibrium collapse is highlighted.

Computational implications.
Equilibrium collapse in infinite-horizon MG's allows us to use the CCE computation technique of (Daskalakis et al., 2022) in order to compute an $\epsilon$ -approximate NE. Namely, given an accuracy threshold $\epsilon$ , we truncate the infinite-horizon game to its effective horizon $H \coloneqq \frac{\log(1 / \epsilon)}{1 - \gamma}$ . Then, we define reward functions that depend on the time-step $h$ , i.e., $r_{k,h} = \gamma^{h-1} r_k$ . Finally,

Corollary 3.2 (Computing a NE—infinite-horizon). Given an infinite-horizon switching-control zero-sum polymatrix game $\Gamma$ , it is possible to compute a Nash equilibrium policy that is Markovian and nonstationary, with probability at least $1 - \delta$ , in time poly $\left(n, \frac{1}{1 - \gamma}, S, \max_k |\mathcal{A}_k|, \frac{1}{\epsilon}, \log(1 / \delta)\right)$ .

# 4 Conclusion and open problems

In this paper, we unified switching-control Markov games and zero-sum polymatrix normal-form games. We highlighted how numerous applications can be modeled within this framework, and we focused on the phenomenon of equilibrium collapse from the set of coarse-correlated equilibria to that of Nash equilibria. This property has implications for computing approximate Nash equilibria in switching-control zero-sum polymatrix Markov games; it ensures that this can be done efficiently.

Open problems. In light of the proposed problem and our results, there are multiple interesting open questions:

- Is it possible to use a policy optimization algorithm similar to those of (Erez et al., 2022; Zhang et al., 2022) in order to converge to an approximate Nash equilibrium? We note that the question can be settled in one of two ways: either extend the current result of equilibrium collapse to policies that are non-Markovian, or guarantee convergence to Markovian policies.
The notion of regret in (Erez et al., 2022) gives rise to the computation of a CCE that is a non-Markovian policy, in the sense that the policy at every timestep depends on the policy sampled from the history of no-regret play and not only on the given state.
- We conjecture that a convergence rate of $O\left(\frac{1}{T}\right)$ to a NE is possible, i.e., that there exists an algorithm with running time $O(1 / \epsilon)$ that computes an $\epsilon$ -approximate NE.
- Are there more classes of Markov games in which computing Nash equilibria is computationally tractable?

# References

Alekh Agarwal, Sham M. Kakade, Jason D. Lee, and Gaurav Mahajan. Optimality and approximation with policy gradient methods in markov decision processes. In *Conference on Learning Theory*, COLT 2020, 9-12 July 2020, volume 125 of *Proceedings of Machine Learning Research*, pages 64-66. PMLR, 2020.
Ioannis Anagnostides, Ioannis Panageas, Gabriele Farina, and Tuomas Sandholm. On last-iterate convergence beyond zero-sum games. In Kamalika Chaudhuri, Stefanie Jegelka, Le Song, Csaba Szepesvári, Gang Niu, and Sivan Sabato, editors, International Conference on Machine Learning, ICML 2022, 17-23 July 2022, Baltimore, Maryland, USA, volume 162 of Proceedings of Machine Learning Research, pages 536-581. PMLR, 2022. URL https://proceedings.mlr.press/v162/anagnostides22a.html.
Ruicheng Ao, Shicong Cen, and Yuejie Chi. Asynchronous gradient play in zero-sum multi-agent games, 2022.
D. P. Bertsekas. Dynamic Programming and Optimal Control. Athena Scientific, 2nd edition, 2000. ISBN 1886529094.
Michael Bowling. Convergence problems of general-sum multiagent reinforcement learning. In Proceedings of the Seventeenth International Conference on Machine Learning, pages 89-94. Morgan Kaufmann, 2000.
Michael Bowling, Neil Burch, Michael Johanson, and Oskari Tammelin. Heads-up limit hold'em poker is solved. Science, 347(6218):145-149, 2015. doi: 10.1126/science.1259433.
Noam Brown and Tuomas Sandholm. Superhuman ai for heads-up no-limit poker: Libratus beats top professionals. Science, 359(6374):418-424, 2018. doi: 10.1126/science.aao1733.
Noam Brown and Tuomas Sandholm. Superhuman ai for multiplayer poker. Science, 365(6456):885-890, 2019. doi: 10.1126/science.aay2400.
Noam Brown, Anton Bakhtin, Adam Lerer, and Qucheng Gong. Combining deep reinforcement learning and search for imperfect-information games. In Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, 2020.
Yang Cai, Ozan Candogan, Constantinos Daskalakis, and Christos Papadimitriou. Zero-sum polymatrix games: A generalization of minmax. Mathematics of Operations Research, 41(2):648-655, 2016.
Shicong Cen, Yuting Wei, and Yuejie Chi. Fast policy extragradient methods for competitive games with entropy regularization. In Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information Processing Systems 2021, NeurIPS 2021, pages 27952-27964, 2021.
Dingyang Chen, Qi Zhang, and Thinh T. Doan. Convergence and price of anarchy guarantees of the softmax policy gradient in markov potential games. In Decision Awareness in Reinforcement Learning Workshop at ICML 2022, 2022.
Xi Chen, Xiaotie Deng, and Shang-Hua Teng. Settling the complexity of computing two-player nash equilibria. J. ACM, 56(3):14:1-14:57, 2009. doi: 10.1145/1516512.1516516.
Caroline Claus and Craig Boutilier. The dynamics of reinforcement learning in cooperative multiagent systems. In Proceedings of the Fifteenth National Conference on Artificial Intelligence and Tenth Innovative Applications of Artificial Intelligence Conference, AAAI 98, pages 746-752. AAAI Press / The MIT Press, 1998.
Anne Condon. On algorithms for simple stochastic games. In Advances in Computational Complexity Theory, volume 13 of DIMACS Series in Discrete Mathematics and Theoretical Computer Science, pages 51-73.
American Mathematical Society, 1993.
Qiwen Cui, Kaiqing Zhang, and Simon Du. Breaking the curse of multiagents in a large state space: RL in markov games with independent linear function approximation. In The Thirty Sixth Annual Conference on Learning Theory, pages 2651-2652. PMLR, 2023.
Constantinos Daskalakis, Paul W Goldberg, and Christos H Papadimitriou. The complexity of computing a nash equilibrium. SIAM Journal on Computing, 39(1):195-259, 2009.
Constantinos Daskalakis, Dylan J Foster, and Noah Golowich. Independent policy gradient methods for competitive reinforcement learning. Advances in neural information processing systems, 33:5527-5540, 2020.
Constantinos Daskalakis, Noah Golowich, and Kaiqing Zhang. The complexity of markov equilibrium in stochastic games. arXiv preprint arXiv:2204.03991, 2022.
Xiaotie Deng, Ningyuan Li, David Mguni, Jun Wang, and Yaodong Yang. On the complexity of computing markov perfect equilibrium in general-sum stochastic games. National Science Review, 10(1):nwac256, 2023.
Dongsheng Ding, Chen-Yu Wei, Kaiqing Zhang, and Mihailo R Jovanovic. Independent policy gradient for large-scale markov potential games: Sharper rates, function approximation, and game-agnostic convergence. arXiv preprint arXiv:2202.04129, 2022.
David Easley and Jon Kleinberg. Networks, crowds, and markets: Reasoning about a highly connected world. Cambridge university press, 2010.
Liad Erez, Tal Lancewicki, Uri Sherman, Tomer Koren, and Yishay Mansour. Regret minimization and convergence to equilibria in general-sum markov games, 2022.
Kousha Etessami and Mihalis Yannakakis. On the complexity of nash equilibria and other fixed points. SIAM J. Comput., 39(6):2531-2597, 2010. doi: 10.1137/080720826.
Jerzy Filar and Koos Vrieze. Competitive Markov decision processes. Springer Science & Business Media, 2012.
Jerzy A Filar and TES Raghavan. A matrix game solution of the single-controller stochastic game.
Mathematics of Operations Research, 9(3):356-362, 1984.
Jerzy A Filar, Todd A Schultz, Frank Thuijsman, and OJ Vrieze. Nonlinear programming and stationary equilibria in stochastic games. Mathematical Programming, 50(1-3):227-237, 1991.
Roy Fox, Stephen M. McAleer, Will Overman, and Ioannis Panageas. Independent natural policy gradient always converges in markov potential games. In International Conference on Artificial Intelligence and Statistics, AISTATS 2022, volume 151 of Proceedings of Machine Learning Research, pages 4414-4425. PMLR, 2022.
Peng Guan, Maxim Raginsky, Rebecca Willett, and Daphney-Stavroula Zois. Regret minimization algorithms for single-controller zero-sum stochastic games. In 2016 IEEE 55th Conference on Decision and Control (CDC), pages 7075-7080. IEEE, 2016.
Junling Hu and Michael P. Wellman. Nash q-learning for general-sum stochastic games. J. Mach. Learn. Res., 4:1039-1069, 2003.
Chi Jin, Zeyuan Allen-Zhu, Sébastien Bubeck, and Michael I. Jordan. Is q-learning provably efficient? In Advances in Neural Information Processing Systems 31: Annual Conference on Neural Information Processing Systems 2018, NeurIPS 2018, pages 4868-4878, 2018.
Yujia Jin, Vidya Muthukumar, and Aaron Sidford. The complexity of infinite-horizon general-sum stochastic games. CoRR, abs/2204.04186, 2022. doi: 10.48550/arXiv.2204.04186. URL https://doi.org/10.48550/arXiv.2204.04186.
Fivos Kalogiannis, Ioannis Anagnostides, Ioannis Panageas, Emmanouil-Vasileios Vlatakis-Gkaragkounis, Vaggos Chatziafratis, and Stelios Stavroulakis. Efficiently computing nash equilibria in adversarial team markov games. arXiv preprint arXiv:2208.02204, 2022.
Stefanos Leonardos, Will Overman, Ioannis Panageas, and Georgios Piliouras. Global convergence of multi-agent policy gradient in markov potential games. arXiv preprint arXiv:2106.01969, 2021.
Yuanzhi Li, Ruosong Wang, and Lin F. Yang. Settling the horizon-dependence of sample complexity in reinforcement learning.
In 62nd IEEE Annual Symposium on Foundations of Computer Science, FOCS 2021, pages 965-976. IEEE, 2021. doi: 10.1109/FOCS52979.2021.00097.
Michael L Littman. Markov games as a framework for multi-agent reinforcement learning. In Machine learning proceedings 1994, pages 157-163. Elsevier, 1994.
Yuping Luo, Huazhe Xu, Yuanzhi Li, Yuandong Tian, Trevor Darrell, and Tengyu Ma. Algorithmic framework for model-based deep reinforcement learning with theoretical guarantees. In 7th International Conference on Learning Representations, ICLR 2019. OpenReview.net, 2019.
Chinmay Maheshwari, Manxi Wu, Druv Pai, and Shankar Sastry. Independent and decentralized learning in markov potential games, 2022.
Matej Moravčík, Martin Schmid, Neil Burch, Viliam Lisý, Dustin Morrill, Nolan Bard, Trevor Davis, Kevin Waugh, Michael Johanson, and Michael Bowling. Deepstack: Expert-level artificial intelligence in heads-up no-limit poker. Science, 356(6337):508-513, 2017. doi: 10.1126/science.aam6960.
John Nash. Non-cooperative games. Annals of mathematics, pages 286-295, 1951.
Gergely Neu and Ciara Pike-Burke. A unifying view of optimism in episodic reinforcement learning. Advances in Neural Information Processing Systems, 33:1392-1403, 2020.
L. Panait and S. Luke. Cooperative Multi-Agent Learning: The State of the Art. Autonomous Agents and Multi-Agent Systems, 11(3):387-434, Nov 2005. doi: 10.1007/s10458-005-2631-2.
Julien Perolat, Bart de Vylder, Daniel Hennes, Eugene Tarassov, Florian Strub, Vincent de Boer, Paul Muller, Jerome T. Connor, Neil Burch, Thomas Anthony, Stephen McAleer, Romuald Elie, Sarah H. Cen, Zhe Wang, Audrunas Gruslys, Aleksandra Malysheva, Mina Khan, Sherjil Ozair, Finbarr Timbers, Toby Pohlen, Tom Eccles, Mark Rowland, Marc Lanctot, Jean-Baptiste Lespiau, Bilal Piot, Shayegan Omidshafiei, Edward Lockhart, Laurent Sifre, Nathalie Beauguerlange, Remi Munos, David Silver, Satinder Singh, Demis Hassabis, and Karl Tuyls.
Mastering the game of stratego with model-free multiagent reinforcement learning, 2022.
Shuang Qiu, Xiaohan Wei, Jieping Ye, Zhaoran Wang, and Zhuoran Yang. Provably efficient fictitious play policy optimization for zero-sum markov games with structured transitions. In International Conference on Machine Learning, pages 8715-8725. PMLR, 2021.
Aviad Rubinstein. Settling the complexity of computing approximate two-player nash equilibria. SIGecom Exch., 15(2):45-49, 2017. doi: 10.1145/3055589.3055596.
Muhammed Sayin, Kaiqing Zhang, David Leslie, Tamer Basar, and Asuman Ozdaglar. Decentralized q-learning in zero-sum markov games. In M. Ranzato, A. Beygelzimer, Y. Dauphin, P.S. Liang, and J. Wortman Vaughan, editors, Advances in Neural Information Processing Systems, volume 34, pages 18320-18334. Curran Associates, Inc., 2021.
Muhammed O Sayin, Francesca Parise, and Asuman Ozdaglar. Fictitious play in zero-sum stochastic games. arXiv preprint arXiv:2010.04223, 2020.
Muhammed O Sayin, Kaiqing Zhang, and Asuman Ozdaglar. Fictitious play in markov games with single controller. In Proceedings of the 23rd ACM Conference on Economics and Computation, pages 919-936, 2022.
Frank Schweitzer, Giorgio Fagiolo, Didier Sornette, Fernando Vega-Redondo, Alessandro Vespignani, and Douglas R White. Economic networks: The new challenges. science, 325(5939):422-425, 2009.
Lloyd S Shapley. Stochastic games. Proceedings of the national academy of sciences, 39(10):1095-1100, 1953.
Aaron Sidford, Mengdi Wang, Xian Wu, Lin Yang, and Yinyu Ye. Near-optimal time and sample complexities for solving markov decision processes with a generative model. In Advances in Neural Information Processing Systems 31: Annual Conference on Neural Information Processing Systems 2018, pages 5192-5202, 2018.
David Silver, Julian Schrittwieser, Karen Simonyan, Ioannis Antonoglou, Aja Huang, Arthur Guez, Thomas Hubert, Lucas Baker, Matthew Lai, Adrian Bolton, Yutian Chen, Timothy P.
Lillicrap, Fan Hui, Laurent Sifre, George van den Driessche, Thore Graepel, and Demis Hassabis. Mastering the game of go without human knowledge. Nat., 550(7676):354-359, 2017. doi: 10.1038/nature24270.
R. S. Sutton and A. G. Barto. Reinforcement Learning: An Introduction. A Bradford Book, Cambridge, MA, USA, 2018. ISBN 0262039249.
György Szabó and Gabor Fath. Evolutionary games on graphs. Physics reports, 446(4-6):97-216, 2007.
Yodyium Tipsuwan and Mo-Yuen Chow. Control methodologies in networked control systems. Control engineering practice, 11(10):1099-1111, 2003.
Oriol Vinyals, Igor Babuschkin, Wojciech M. Czarnecki, Michael Mathieu, Andrew Dudzik, Junyoung Chung, David H. Choi, Richard Powell, Timo Ewalds, Petko Georgiev, Junhyuk Oh, Dan Horgan, Manuel Kroiss, Ivo Danihelka, Aja Huang, Laurent Sifre, Trevor Cai, John P. Agapiou, Max Jaderberg, Alexander Sasha Vezhnevets, Remi Leblond, Tobias Pohlen, Valentin Dalibard, David Budden, Yury Sulsky, James Molloy, Tom Le Paine, Caglar Gulçehre, Ziyu Wang, Tobias Pfaff, Yuhuai Wu, Roman Ring, Dani Yogatama, Dario Wunsch, Katrina McKinney, Oliver Smith, Tom Schaul, Timothy P. Lillicrap, Koray Kavukcuoglu, Demis Hassabis, Chris Apps, and David Silver. Grandmaster level in starcraft II using multi-agent reinforcement learning. Nat., 575(7782):350-354, 2019. doi: 10.1038/s41586-019-1724-z.
OJ Vrieze, SH Tijs, TES Raghavan, and JA Filar. A finite algorithm for the switching control stochastic game. Or Spektrum, 5(1):15-24, 1983.
Xiaofeng Wang and Tuomas Sandholm. Reinforcement learning to play an optimal nash equilibrium in team markov games. In Advances in Neural Information Processing Systems 15, NIPS 2002, December 9-14, 2002, pages 1571-1578. MIT Press, 2002.
Yuanhao Wang, Qinghua Liu, Yu Bai, and Chi Jin. Breaking the curse of multiagency: Provably efficient decentralized multi-agent rl with function approximation. arXiv preprint arXiv:2302.06606, 2023.
Chen-Yu Wei, Chung-Wei Lee, Mengxiao Zhang, and Haipeng Luo. Last-iterate convergence of decentralized optimistic gradient descent/ascent in infinite-horizon competitive markov games. In Mikhail Belkin and Samory Kpotufe, editors, Proceedings of Thirty Fourth Conference on Learning Theory, volume 134 of Proceedings of Machine Learning Research, pages 4259-4299. PMLR, 15-19 Aug 2021.
Runyu Zhang, Zhaolin Ren, and Na Li. Gradient play in stochastic games: stationary points, convergence, and sample complexity. arXiv preprint arXiv:2106.00198, 2021.
Runyu Zhang, Qinghua Liu, Huan Wang, Caiming Xiong, Na Li, and Yu Bai. Policy optimization for markov games: Unified framework and faster convergence, 2022.

# A Missing statements and proofs

# A.1 Statements for Section 3.1

Claim A.1. Consider a two-player Markov game in which both players affect the transitions, a correlated policy $\sigma$ , and its corresponding marginalized product policy $\pi^{\sigma} = \pi_1^{\sigma} \times \pi_2^{\sigma}$ . Then, for any $\pi_1', \pi_2'$ and any $k \in \{1,2\}$ ,

$$
V_{k,1}^{\boldsymbol{\pi}_1' \times \boldsymbol{\sigma}_{-1}}(s_1) = V_{k,1}^{\boldsymbol{\pi}_1' \times \boldsymbol{\pi}_2^{\sigma}}(s_1),
$$

$$
V_{k,1}^{\boldsymbol{\sigma}_{-2} \times \boldsymbol{\pi}_2'}(s_1) = V_{k,1}^{\boldsymbol{\pi}_1^{\sigma} \times \boldsymbol{\pi}_2'}(s_1).
$$

Proof. We will effectively show that the problem of best-responding to a correlated policy $\sigma$ is equivalent to best-responding to the opponent's marginal policy of $\sigma$ . The proof follows from the equivalence of the two MDPs.

As a reminder,

$$
\pi_{1,h}(a|s) = \sum_{b \in \mathcal{A}_2} \sigma_h(a, b|s),
$$

$$
\pi_{2,h}(b|s) = \sum_{a \in \mathcal{A}_1} \sigma_h(a, b|s).
$$

As we have seen in Section 2.1, in the case of a unilateral deviation from the joint policy $\sigma$ , an agent faces a single-agent MDP.
More specifically, agent 2 best-responds by optimizing a reward function $\bar{r}_{2,h}(s,b)$ under a transition kernel $\bar{\mathbb{P}}_2$ for which,

$$
\bar{r}_{2,h}(s,b) = \mathbb{E}_{a \sim \pmb{\sigma}}[r_{2,h}(s,a,b)] = \mathbb{E}_{a \sim \pmb{\pi}_1^{\sigma}}[r_{2,h}(s,a,b)] = r_{2,h}(s, \pmb{\pi}_1^{\sigma}, b).
$$

Similarly,

$$
\bar{r}_{1,h}(s,a) = r_{1,h}(s, a, \pmb{\pi}_2^{\sigma}).
$$

Analogously, for each of the transition kernels,

$$
\bar{\mathbb{P}}_{2,h}(s'|s,b) = \mathbb{E}_{a \sim \sigma}[\mathbb{P}_h(s'|s,a,b)] = \mathbb{E}_{a \sim \pi_1^{\sigma}}[\mathbb{P}_h(s'|s,a,b)] = \mathbb{P}_h(s'|s, \boldsymbol{\pi}_1^{\sigma}, b),
$$

and, for agent 1,

$$
\bar{\mathbb{P}}_{1,h}(s'|s,a) = \mathbb{P}_h(s'|s, a, \boldsymbol{\pi}_2^{\sigma}).
$$

Hence, it follows that $V_{2,1}^{\sigma_{-2} \times \pi_2'}(s_1) = V_{2,1}^{\pi_1^{\sigma} \times \pi_2'}(s_1)$ for all $\pi_2'$ , and $V_{1,1}^{\pi_1' \times \sigma_{-1}}(s_1) = V_{1,1}^{\pi_1' \times \pi_2^{\sigma}}(s_1)$ for all $\pi_1'$ . $\square$

# A.2 Proof of Theorem 3.2

The best-response program. First, we state the following lemma, which will prove useful for several of our arguments.

Lemma A.1 (Best-response LP). Let $\hat{\sigma}$ be a (possibly correlated) joint policy.
Consider the following linear program with variables $\boldsymbol{w} \in \mathbb{R}^{n \times H \times S}$,

$$
\begin{array}{l}
\min \quad \sum_{k \in [n]} w_{k,1}(s_1) - \boldsymbol{e}_{s_1}^{\top} \sum_{h=1}^{H} \left( \prod_{\tau=1}^{h} \mathbb{P}_{\tau}(\hat{\boldsymbol{\sigma}}_{\tau}) \right) \boldsymbol{r}_{k,h}(\hat{\boldsymbol{\sigma}}_h) \\
\text{s.t.} \quad w_{k,h}(s) \geq r_{k,h}(s, a, \hat{\sigma}_{-k,h}) + \mathbb{P}_h(s, a, \hat{\sigma}_{-k,h}) \boldsymbol{w}_{k,h+1}, \\
\qquad \forall s \in \mathcal{S}, \forall h \in [H-1], \forall k \in [n], \forall a \in \mathcal{A}_k; \\
\qquad w_{k,H}(s) = 0, \quad \forall k \in [n], \forall s \in \mathcal{S}.
\end{array}
$$

The optimal solution $\boldsymbol{w}^{\dagger}$ of the program is unique and corresponds to the value function of each player $k \in [n]$ when player $k$ best-responds to $\hat{\sigma}$.

Proof. We observe that the program is separable into $n$ independent linear programs, each with variables $\boldsymbol{w}_k \in \mathbb{R}^{H \times S}$,

$$
\begin{array}{l}
\min \quad w_{k,1}(s_1) \\
\text{s.t.} \quad w_{k,h}(s) \geq r_{k,h}(s, a, \hat{\sigma}_{-k,h}) + \mathbb{P}_h(s, a, \hat{\sigma}_{-k,h}) \boldsymbol{w}_{k,h+1}, \\
\qquad \forall s \in \mathcal{S}, \forall h \in [H-1], \forall a \in \mathcal{A}_k; \\
\qquad w_{k,H}(s) = 0, \quad \forall s \in \mathcal{S}.
\end{array}
$$

Each of these linear programs describes a single-agent MDP (Neu and Pike-Burke, 2020, Section 2), the agent being $k$, which, as we have seen in the paragraph on best-response policies, is equivalent to the problem of finding a best response to $\hat{\sigma}_{-k}$. It follows that the optimal $\boldsymbol{w}_k^{\dagger}$ for every program is unique (each program corresponds to a set of Bellman optimality equations). $\square$

Properties of the NE program.
Second, we need to prove that the minimum value of the objective function of the program is nonnegative.

Lemma A.2 (Feasibility of $(\mathrm{P}_{\mathrm{NE}})$ and global optimum). The nonlinear program $(\mathrm{P}_{\mathrm{NE}})$ is feasible, has a nonnegative objective value, and its global minimum is equal to 0.

Proof. For the feasibility of the nonlinear program, we invoke the theorem of the existence of a Nash equilibrium. That is, let $\pi^{\star}$ be a NE product policy and $\boldsymbol{w}^{\star} \in \mathbb{R}^{n \times H \times S}$ a vector such that $w_{k,h}^{\star}(s) = V_{k,h}^{\dagger, \pi_{-k}^{\star}}(s)$ for all $(k, h, s) \in [n] \times [H] \times \mathcal{S}$.

By Lemma A.1, we know that $(\pi^{\star}, \boldsymbol{w}^{\star})$ satisfies all the constraints of $(\mathrm{P}_{\mathrm{NE}})$. Additionally, because $\pi^{\star}$ is a NE, $V_{k,1}^{\pi^{\star}}(s_1) = V_{k,1}^{\dagger, \pi_{-k}^{\star}}(s_1)$ for all $k \in [n]$. Observing that

$$
w_{k,1}^{\star}(s_1) - \boldsymbol{e}_{s_1}^{\top} \sum_{h=1}^{H} \left( \prod_{\tau=1}^{h} \mathbb{P}_{\tau}(\boldsymbol{\pi}_{\tau}^{\star}) \right) \boldsymbol{r}_{k,h}(\boldsymbol{\pi}_h^{\star}) = V_{k,1}^{\dagger, \boldsymbol{\pi}_{-k}^{\star}}(s_1) - V_{k,1}^{\boldsymbol{\pi}^{\star}}(s_1) = 0
$$

concludes the argument that a NE attains an objective value equal to 0.
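As an aside, the term $\boldsymbol{e}_{s_1}^{\top} \sum_{h} (\prod_{\tau \leq h} \mathbb{P}_{\tau}(\boldsymbol{\pi}_{\tau})) \boldsymbol{r}_{k,h}(\boldsymbol{\pi}_h)$ appearing in the display is the on-policy value of player $k$, and it can be evaluated by forward-propagating the state distribution. A minimal sketch (the function name and the toy two-state numbers are illustrative assumptions, not from the paper):

```python
# Sketch: evaluating e_{s1}^T sum_h (prod_{tau<=h} P_tau(pi_tau)) r_{k,h}(pi_h)
# by pushing the state distribution forward one step at a time. P[h] and r[h]
# are the policy-induced transition matrix and reward vector at step h; the
# numbers below are made up for illustration.

def finite_horizon_value(s1, P, r):
    H = len(P)
    S = len(r[0])
    d = [0.0] * S
    d[s1] = 1.0                      # the indicator vector e_{s1}
    total = 0.0
    for h in range(H):
        # advance the distribution, matching the convention prod_{tau=1}^{h} P_tau
        d = [sum(d[s] * P[h][s][t] for s in range(S)) for t in range(S)]
        total += sum(d[s] * r[h][s] for s in range(S))
    return total

P = [[[0.0, 1.0], [1.0, 0.0]]] * 2   # deterministic two-state swap, H = 2
r = [[1.0, 0.0]] * 2                 # reward 1 only in state 0
print(finite_horizon_value(0, P, r))  # visits state 1 then state 0: 0 + 1 = 1.0
```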
Continuing, we observe that, due to (1), the objective function can be equivalently rewritten as

$$
\begin{array}{l}
\sum_{k \in [n]} \left( w_{k,1}(s_1) - \boldsymbol{e}_{s_1}^{\top} \sum_{h=1}^{H} \left( \prod_{\tau=1}^{h} \mathbb{P}_{\tau}(\boldsymbol{\pi}_{\tau}) \right) \boldsymbol{r}_{k,h}(\boldsymbol{\pi}_h) \right) \\
= \sum_{k \in [n]} w_{k,1}(s_1) - \boldsymbol{e}_{s_1}^{\top} \sum_{h=1}^{H} \left( \prod_{\tau=1}^{h} \mathbb{P}_{\tau}(\boldsymbol{\pi}_{\tau}) \right) \sum_{k \in [n]} \boldsymbol{r}_{k,h}(\boldsymbol{\pi}_h) \\
= \sum_{k \in [n]} w_{k,1}(s_1).
\end{array}
$$

Next, we focus on the inequality constraint

$$
w_{k,h}(s) \geq r_{k,h}(s, a, \pi_{-k,h}) + \mathbb{P}_h(s, a, \pi_{-k,h}) \boldsymbol{w}_{k,h+1},
$$

which holds for all $s \in \mathcal{S}$, all players $k \in [n]$, all $a \in \mathcal{A}_k$, and all timesteps $h \in [H-1]$.

Summing over $a \in \mathcal{A}_k$ while multiplying each term by the corresponding coefficient $\pi_{k,h}(a \mid s)$, the display, written as an equivalent element-wise vector inequality, reads:

$$
\boldsymbol{w}_{k,h} \geq \boldsymbol{r}_{k,h}(\boldsymbol{\pi}_h) + \mathbb{P}_h(\boldsymbol{\pi}_h) \boldsymbol{w}_{k,h+1}.
$$

Finally, after consecutively substituting $\boldsymbol{w}_{k,h+1}$ with the element-wise lesser term $\boldsymbol{r}_{k,h+1}(\boldsymbol{\pi}_{h+1}) + \mathbb{P}_{h+1}(\boldsymbol{\pi}_{h+1}) \boldsymbol{w}_{k,h+2}$, we end up with the inequality:

$$
\boldsymbol{w}_{k,1} \geq \sum_{h=1}^{H} \left( \prod_{\tau=1}^{h} \mathbb{P}_{\tau}(\boldsymbol{\pi}_{\tau}) \right) \boldsymbol{r}_{k,h}(\boldsymbol{\pi}_h). \tag{5}
$$

Summing over $k$ and taking the $s_1$-th entry of the inequality,

$$
\sum_{k \in [n]} w_{k,1}(s_1) \geq \boldsymbol{e}_{s_1}^{\top} \sum_{k \in [n]} \sum_{h=1}^{H} \left( \prod_{\tau=1}^{h} \mathbb{P}_{\tau}(\boldsymbol{\pi}_{\tau}) \right) \boldsymbol{r}_{k,h}(\boldsymbol{\pi}_h) = 0,
$$

where the equality holds due to the zero-sum property (1).

An approximate NE is an approximate global minimum. We show that an $\epsilon$-approximate NE, $\pi^{\star}$, achieves an $n\epsilon$-approximate global minimum of the program. Utilizing Lemma A.1, setting $w_{k,1}^{\star}(s_1) = V_{k,1}^{\dagger, \pi_{-k}^{\star}}(s_1)$, and using the definition of an $\epsilon$-approximate NE, we see that

$$
\begin{array}{l}
\sum_{k \in [n]} \left( w_{k,1}^{\star}(s_1) - \boldsymbol{e}_{s_1}^{\top} \sum_{h=1}^{H} \left( \prod_{\tau=1}^{h} \mathbb{P}_{\tau}(\boldsymbol{\pi}_{\tau}^{\star}) \right) \boldsymbol{r}_{k,h}(\boldsymbol{\pi}_h^{\star}) \right) = \sum_{k \in [n]} \left( w_{k,1}^{\star}(s_1) - V_{k,1}^{\boldsymbol{\pi}^{\star}}(s_1) \right) \\
\leq \sum_{k \in [n]} \epsilon = n\epsilon.
\end{array}
$$

Indeed, this means that $(\pi^{\star}, \boldsymbol{w}^{\star})$ is an $n\epsilon$-approximate global minimizer of $(\mathrm{P}_{\mathrm{NE}})$.

An approximate global minimum is an approximate NE. For the opposite direction, let $(\pi^{\star}, \boldsymbol{w}^{\star})$ be a feasible $\epsilon$-approximate global minimizer of the program $(\mathrm{P}_{\mathrm{NE}})$. Because the global minimum of the program is equal to 0, the objective value at an $\epsilon$-approximate global minimum is at most $\epsilon$.
We observe that, for every $k \in [n]$,

$$
w_{k,1}^{\star}(s_1) \geq \boldsymbol{e}_{s_1}^{\top} \sum_{h=1}^{H} \left( \prod_{\tau=1}^{h} \mathbb{P}_{\tau}(\boldsymbol{\pi}_{\tau}^{\star}) \right) \boldsymbol{r}_{k,h}(\boldsymbol{\pi}_h^{\star}), \tag{6}
$$

which follows from induction on the inequality constraint over all $h$, similarly to (5).

Consequently, the assumption that

$$
\epsilon \geq \sum_{k \in [n]} \left( w_{k,1}^{\star}(s_1) - \boldsymbol{e}_{s_1}^{\top} \sum_{h=1}^{H} \left( \prod_{\tau=1}^{h} \mathbb{P}_{\tau}(\boldsymbol{\pi}_{\tau}^{\star}) \right) \boldsymbol{r}_{k,h}(\boldsymbol{\pi}_h^{\star}) \right),
$$

together with Equation (6), yields

$$
\begin{array}{l}
\epsilon \geq w_{k,1}^{\star}(s_1) - \boldsymbol{e}_{s_1}^{\top} \sum_{h=1}^{H} \left( \prod_{\tau=1}^{h} \mathbb{P}_{\tau}(\boldsymbol{\pi}_{\tau}^{\star}) \right) \boldsymbol{r}_{k,h}(\boldsymbol{\pi}_h^{\star}) \\
\geq V_{k,1}^{\dagger, \boldsymbol{\pi}_{-k}^{\star}}(s_1) - V_{k,1}^{\boldsymbol{\pi}^{\star}}(s_1),
\end{array}
$$

where the second inequality holds because $\boldsymbol{w}^{\star}$ is feasible for $(\mathrm{P}_{\mathrm{BR}})$. The latter concludes the proof, as the display coincides with the definition of an $\epsilon$-approximate NE. $\square$

# A.3 Proof of Claim 3.1

Proof.
The value functions at $s_1$ for $h = 1$ of players 1 and 2 read:

$$
\begin{array}{l}
V_{1,1}^{\boldsymbol{\sigma}}(s_1) = \boldsymbol{e}_{s_1}^{\top} (\boldsymbol{r}_1(\boldsymbol{\sigma}) + \mathbb{P}(\boldsymbol{\sigma}) \boldsymbol{r}_1(\boldsymbol{\sigma})) \\
= -\frac{9\sigma(a_1, b_1 \mid s_1)}{20} + \frac{\sigma(a_1, b_2 \mid s_1)}{20} + \frac{(1 - \sigma(a_1, b_1 \mid s_1))(\sigma(a_1, b_1 \mid s_2) + \sigma(a_1, b_2 \mid s_2))}{20},
\end{array}
$$

and

$$
\begin{array}{l}
V_{2,1}^{\boldsymbol{\sigma}}(s_1) = \boldsymbol{e}_{s_1}^{\top} (\boldsymbol{r}_2(\boldsymbol{\sigma}) + \mathbb{P}(\boldsymbol{\sigma}) \boldsymbol{r}_2(\boldsymbol{\sigma})) \\
= -\frac{9\sigma(a_1, b_1 \mid s_1)}{20} + \frac{\sigma(a_2, b_2 \mid s_1)}{20} + \frac{(1 - \sigma(a_1, b_1 \mid s_1))(\sigma(a_1, b_1 \mid s_2) + \sigma(a_2, b_1 \mid s_2))}{20}.
\end{array}
$$

We are indifferent to the corresponding value function of player 3, as they only have one available action per state and hence cannot affect their rewards. For the joint policy $\sigma$, the corresponding value functions of both players 1 and 2 are $V_{1,1}^{\sigma}(s_1) = V_{2,1}^{\sigma}(s_1) = \frac{1}{20}$.

Deviations. We will now prove that no deviation of player 1 manages to accumulate a reward greater than $\frac{1}{20}$. The same follows for player 2 due to symmetry.

When a player deviates unilaterally from a joint policy, they experience a single-agent Markov decision process (MDP). It is well known that MDPs always have a deterministic optimal policy.
As such, it suffices to check whether $V_{1,1}^{\pi_1 \times \sigma_{-1}}(s_1)$ is greater than $\frac{1}{20}$ for any of the four possible deterministic policies:

- $\pi_1(s_1) = \pi_1(s_2) = (1 \quad 0)$,
- $\pi_1(s_1) = (1 \quad 0), \; \pi_1(s_2) = (0 \quad 1)$,
- $\pi_1(s_1) = \pi_1(s_2) = (0 \quad 1)$,
- $\pi_1(s_1) = (0 \quad 1), \; \pi_1(s_2) = (1 \quad 0)$.

Finally, the value function of any deviation $\pi_1'$ reads

$$
V_{1,1}^{\boldsymbol{\pi}_1' \times \boldsymbol{\sigma}_{-1}}(s_1) = -\frac{\pi_1'(a_1 \mid s_1)}{5} - \frac{\pi_1'(a_1 \mid s_2)(\pi_1'(a_1 \mid s_1) - 2)}{40}.
$$

We can now check that $V_{1,1}^{\pi_1' \times \sigma_{-1}}(s_1) \leq \frac{1}{20}$ for all deterministic policies. By symmetry, it follows that $V_{2,1}^{\pi_2' \times \sigma_{-2}}(s_1) \leq \frac{1}{20}$, and as such $\sigma$ is indeed a CCE. $\square$

# A.4 Proof of Claim 3.2

Proof. In general, the value functions of players 1 and 2 are:

$$
V_{1,1}^{\boldsymbol{\pi}_1 \times \boldsymbol{\pi}_2}(s_1) = -\frac{\pi_1(a_1 \mid s_1)\pi_2(b_1 \mid s_1)}{2} + \frac{\pi_1(a_1 \mid s_1)}{20} - \frac{\pi_1(a_1 \mid s_2)(\pi_1(a_1 \mid s_1)\pi_2(b_1 \mid s_1) - 1)}{20},
$$

and

$$
V_{2,1}^{\boldsymbol{\pi}_1 \times \boldsymbol{\pi}_2}(s_1) = -\frac{\pi_1(a_1 \mid s_1)\pi_2(b_1 \mid s_1)}{2} + \frac{\pi_2(b_1 \mid s_1)}{20} - \frac{\pi_2(b_1 \mid s_2)(\pi_1(a_1 \mid s_1)\pi_2(b_1 \mid s_1) - 1)}{20}.
$$

Plugging in $\pi_1^{\sigma}, \pi_2^{\sigma}$ yields $V_{1,1}^{\pi_1^{\sigma} \times \pi_2^{\sigma}}(s_1) = V_{2,1}^{\pi_1^{\sigma} \times \pi_2^{\sigma}}(s_1) = -\frac{13}{160}$.
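As a quick, exact sanity check of the $-\frac{13}{160}$ value (illustrative only; `v1` is a hypothetical helper encoding the displayed formula for $V_{1,1}$):

```python
# Sketch: plug the uniform marginals into the displayed formula for
# V_{1,1}^{pi_1 x pi_2}(s_1), using exact rational arithmetic.

from fractions import Fraction

def v1(p1_s1, p2_s1, p1_s2):
    # p1_s1 = pi_1(a_1|s_1), p2_s1 = pi_2(b_1|s_1), p1_s2 = pi_1(a_1|s_2)
    return (-p1_s1 * p2_s1 / 2
            + p1_s1 / 20
            - p1_s2 * (p1_s1 * p2_s1 - 1) / 20)

half = Fraction(1, 2)
print(v1(half, half, half))  # -13/160
```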
But if player 1 deviates to, say, $\pi_1'(s_1) = \pi_1'(s_2) = (0 \quad 1)$, they get a value equal to 0, which is clearly greater than $-\frac{13}{160}$. Hence, $\pi_1^{\sigma} \times \pi_2^{\sigma}$ is not a NE. $\square$

# A.5 Proof of Theorem 3.4

Proof. The proof follows from the game of Example 1, and Claims 3.1 and 3.2. $\square$

# B Proofs for infinite-horizon Zero-Sum Polymatrix Markov Games

In this section, we explicitly state definitions, theorems, and proofs relating to infinite-horizon discounted zero-sum polymatrix Markov games.

# B.1 Definitions of equilibria for the infinite horizon

Let us restate the definition specifically for infinite-horizon Markov games. They are defined as a tuple $\Gamma(H, \mathcal{S}, \{\mathcal{A}_k\}_{k \in [n]}, \mathbb{P}, \{r_k\}_{k \in [n]}, \gamma, \rho)$, where:

- $H = \infty$ denotes the time horizon,
- $\mathcal{S}$, with cardinality $S := |\mathcal{S}|$, stands for the state space,
- $\{\mathcal{A}_k\}_{k \in [n]}$ is the collection of every player's action space, while $\mathcal{A} := \mathcal{A}_1 \times \dots \times \mathcal{A}_n$ denotes the joint action space; further, an element of that set, a joint action, is generally noted as $\boldsymbol{a} = (a_1, \ldots, a_n) \in \mathcal{A}$,
- $\mathbb{P}: \mathcal{S} \times \mathcal{A} \to \Delta(\mathcal{S})$ is the transition probability function,
- $r_k: \mathcal{S} \times \mathcal{A} \to [-1, 1]$ yields the reward of player $k$ at a given state and joint action,
- $0 < \gamma < 1$ is the discount factor,
- $\rho \in \Delta(\mathcal{S})$ is the initial state distribution.

Policies and value functions. In infinite-horizon Markov games, policies can still be distinguished in two main ways: Markovian versus non-Markovian, and stationary versus nonstationary. Moreover, a joint policy can be a correlated policy or a product policy.

Markovian policies attribute a probability distribution over actions depending solely on the current state $s$ of the game.
On the other hand, non-Markovian policies attribute a probability distribution over actions that depends on any subset of the history of the game, i.e., they can depend on any sub-sequence of actions and states up until the current timestep of the horizon.

Stationary policies attribute the same probability distribution over actions at every timestep of the horizon. Nonstationary policies, on the contrary, can change depending on the timestep of the horizon.

A joint Markovian stationary policy $\sigma$ is said to be correlated when, for every state $s \in \mathcal{S}$, it attributes a probability distribution over the joint action space $\mathcal{A}$ of all players, i.e., $\sigma(s) \in \Delta(\mathcal{A})$. A Markovian stationary policy $\pi$ is said to be a product policy when, for every $s \in \mathcal{S}$, $\pi(s) \in \prod_{k=1}^{n} \Delta(\mathcal{A}_k)$. It is rather easy to extend the definitions of correlated/product policies to the case of non-Markovian and nonstationary policies.

Given a Markovian stationary policy $\pi$, the value function for an infinite-horizon discounted game is defined as

$$
V_k^{\boldsymbol{\pi}}(s_1) = \mathbb{E}_{\boldsymbol{\pi}} \left[ \sum_{h=1}^{\infty} \gamma^{h-1} r_k(s_h, \boldsymbol{a}_h) \mid s_1 \right] = \boldsymbol{e}_{s_1}^{\top} \sum_{h=1}^{\infty} \gamma^{h-1} \mathbb{P}(\boldsymbol{\pi})^{h-1} \boldsymbol{r}_k(\boldsymbol{\pi}).
$$

It is possible to express the value function of each player $k$ in the following way,

$$
V_k^{\boldsymbol{\pi}}(s_1) = \boldsymbol{e}_{s_1}^{\top} (\mathbf{I} - \gamma \mathbb{P}(\boldsymbol{\pi}))^{-1} \boldsymbol{r}_k(\boldsymbol{\pi}),
$$

where $\mathbf{I}$ is the identity matrix of appropriate dimensions.
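As an aside, the resolvent formula above can be evaluated without explicitly inverting $(\mathbf{I} - \gamma \mathbb{P}(\boldsymbol{\pi}))$, for instance by iterating the Bellman operator $\boldsymbol{v} \leftarrow \boldsymbol{r} + \gamma \mathbb{P} \boldsymbol{v}$, whose fixed point is $(\mathbf{I} - \gamma \mathbb{P})^{-1} \boldsymbol{r}$. A sketch under made-up two-state numbers:

```python
# Sketch: evaluating V = (I - gamma * P(pi))^{-1} r(pi) by fixed-point iteration
# of v <- r + gamma * P v. The contraction factor is gamma < 1, so the iterates
# converge geometrically. The toy chain below is illustrative only.

def policy_value(P, r, gamma, iters=2000):
    S = len(r)
    v = [0.0] * S
    for _ in range(iters):
        v = [r[s] + gamma * sum(P[s][t] * v[t] for t in range(S)) for s in range(S)]
    return v

P = [[0.0, 1.0],
     [0.0, 1.0]]        # both states move to state 1, which is absorbing
r = [0.0, 1.0]
v = policy_value(P, r, gamma=0.9)
print(round(v[1], 6))   # state 1 earns 1 forever: 1 / (1 - 0.9) = 10.0
```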
Also, when the initial state is drawn from the initial state distribution $\rho$, the value function reads $V_k^{\pi}(\boldsymbol{\rho}) = \boldsymbol{\rho}^{\top} (\mathbf{I} - \gamma \mathbb{P}(\boldsymbol{\pi}))^{-1} \boldsymbol{r}_k(\boldsymbol{\pi})$.

Best-response policies. Given an arbitrary joint policy $\sigma$ (which can be either a correlated or a product policy), a best-response policy of a player $k$ is defined to be a $\pi_k^{\dagger} \in \Delta(\mathcal{A}_k)^S$ such that $\pi_k^{\dagger} \in \arg\max_{\pi_k'} V_k^{\pi_k' \times \sigma_{-k}}(s)$. Also, we will denote $V_k^{\dagger, \sigma_{-k}}(s) = \max_{\pi_k'} V_k^{\pi_k' \times \sigma_{-k}}(s)$. It is rather straightforward to see that the problem of computing a best response to a given policy is equivalent to solving a single-agent MDP problem.

Notions of equilibria. Now that best-response policies have been defined, it is straightforward to define the different notions of equilibria. First, we define the notion of a coarse-correlated equilibrium.

Definition B.1 (CCE—infinite-horizon). A joint (potentially correlated) policy $\sigma \in \Delta(\mathcal{A})^S$ is an $\epsilon$-approximate coarse-correlated equilibrium if it holds that

$$
V_k^{\dagger, \sigma_{-k}}(\boldsymbol{\rho}) - V_k^{\sigma}(\boldsymbol{\rho}) \leq \epsilon, \quad \forall k \in [n].
$$

Second, we define the notion of a Nash equilibrium. The main difference from the definition of the coarse-correlated equilibrium is that a NE Markovian stationary policy is a product policy.

Definition B.2 (NE—infinite-horizon). A product policy $\pi \in \prod_{k \in [n]} \Delta(\mathcal{A}_k)^S$ is an $\epsilon$-approximate Nash equilibrium if it holds that

$$
V_k^{\dagger, \pi_{-k}}(\boldsymbol{\rho}) - V_k^{\pi}(\boldsymbol{\rho}) \leq \epsilon, \quad \forall k \in [n].
$$

It is by now folklore that infinite-horizon discounted Markov games admit a stationary Markovian Nash equilibrium.

# C Main results for infinite-horizon games

The workhorse of our arguments in the following results is still the following nonlinear program, $(\mathrm{P}_{\mathrm{NE}}^{\prime})$, with variables $\pi, \boldsymbol{w}$:

$$
\begin{array}{l}
\min \quad \sum_{k \in [n]} \boldsymbol{\rho}^{\top} \left( \boldsymbol{w}_k - (\mathbf{I} - \gamma \mathbb{P}(\boldsymbol{\pi}))^{-1} \boldsymbol{r}_k(\boldsymbol{\pi}) \right) \\
\text{s.t.} \quad w_k(s) \geq r_k(s, a, \boldsymbol{\pi}_{-k}) + \gamma \mathbb{P}(s, a, \boldsymbol{\pi}_{-k}) \boldsymbol{w}_k, \\
\qquad \forall s \in \mathcal{S}, \forall k \in [n], \forall a \in \mathcal{A}_k; \\
\qquad \pi_k(s) \in \Delta(\mathcal{A}_k), \quad \forall s \in \mathcal{S}, \forall k \in [n].
\end{array}
$$

As we will prove, approximate NEs correspond to approximate global minima of $(\mathrm{P}_{\mathrm{NE}}^{\prime})$ and vice versa. Before that, we need some intermediate lemmas. The first lemma we prove is about the best-response program.

The best-response program. Even for the infinite horizon, we can define a linear program for the best responses of all players. That program is the following, with variables $\boldsymbol{w}$,

$$
\min \quad \sum_{k \in [n]} \boldsymbol{\rho}^{\top} \left( \boldsymbol{w}_k - (\mathbf{I} - \gamma \mathbb{P}(\hat{\boldsymbol{\sigma}}))^{-1} \boldsymbol{r}_k(\hat{\boldsymbol{\sigma}}) \right)
$$

$$
\begin{array}{c}
(\mathrm{P}_{\mathrm{BR}}^{\prime}) \qquad \text{s.t.} \quad w_k(s) \geq r_k(s, a, \hat{\boldsymbol{\sigma}}_{-k}) + \gamma \mathbb{P}(s, a, \hat{\boldsymbol{\sigma}}_{-k}) \boldsymbol{w}_k, \\
\forall s \in \mathcal{S}, \forall k \in [n], \forall a \in \mathcal{A}_k.
\end{array}
$$

Lemma C.1 (Best-response LP—infinite-horizon). Fix a (possibly correlated) joint policy $\hat{\sigma}$ and consider the linear program $(\mathrm{P}_{\mathrm{BR}}^{\prime})$. The optimal solution $\boldsymbol{w}^{\dagger}$ of the program is unique and corresponds to the value function of each player $k \in [n]$ when player $k$ best-responds to $\hat{\sigma}$.

Proof. We observe that the program is separable into $n$ independent linear programs, each with variables $\boldsymbol{w}_k \in \mathbb{R}^S$,

$$
\min \quad \boldsymbol{\rho}^{\top} \boldsymbol{w}_k
$$

$$
\text{s.t.} \quad w_k(s) \geq r_k(s, a, \hat{\sigma}_{-k}) + \gamma \mathbb{P}(s, a, \hat{\sigma}_{-k}) \boldsymbol{w}_k, \quad \forall s \in \mathcal{S}, \forall a \in \mathcal{A}_k.
$$

Each of these linear programs describes a single-agent MDP, the agent being $k$. It follows that the optimal $\boldsymbol{w}_k^{\dagger}$ for every program is unique (each program corresponds to a set of Bellman optimality equations). $\square$

Properties of the NE program. Second, we need to prove that the minimum value of the objective function of the program is nonnegative.

Lemma C.2 (Feasibility of $(\mathrm{P}_{\mathrm{NE}}^{\prime})$ and global optimum). The nonlinear program $(\mathrm{P}_{\mathrm{NE}}^{\prime})$ is feasible, has a nonnegative objective value, and its global minimum is equal to 0.

Proof. For the feasibility of the nonlinear program, we invoke the theorem of the existence of a Nash equilibrium. That is, let $\pi^{\star}$ be a NE product policy and $\boldsymbol{w}^{\star} \in \mathbb{R}^{n \times S}$ a vector such that $w_k^{\star}(s) = V_k^{\dagger, \pi_{-k}^{\star}}(s)$ for all $(k, s) \in [n] \times \mathcal{S}$.

By Lemma C.1, we know that $(\pi^{\star}, \boldsymbol{w}^{\star})$ satisfies all the constraints of $(\mathrm{P}_{\mathrm{NE}}^{\prime})$.
Additionally, because $\pi^{\star}$ is a NE, $V_k^{\pi^{\star}}(\boldsymbol{\rho}) = V_k^{\dagger, \pi_{-k}^{\star}}(\boldsymbol{\rho})$ for all $k \in [n]$. Observing that

$$
\boldsymbol{\rho}^{\top} \left( \boldsymbol{w}_k^{\star} - (\mathbf{I} - \gamma \mathbb{P}(\boldsymbol{\pi}^{\star}))^{-1} \boldsymbol{r}_k(\boldsymbol{\pi}^{\star}) \right) = V_k^{\dagger, \boldsymbol{\pi}_{-k}^{\star}}(\boldsymbol{\rho}) - V_k^{\boldsymbol{\pi}^{\star}}(\boldsymbol{\rho}) = 0
$$

concludes the argument that a NE attains an objective value equal to 0.

Continuing, we observe that, due to (1), the objective function can be equivalently rewritten as

$$
\begin{array}{l}
\sum_{k \in [n]} \left( \boldsymbol{\rho}^{\top} \boldsymbol{w}_k - \boldsymbol{\rho}^{\top} (\mathbf{I} - \gamma \mathbb{P}(\boldsymbol{\pi}))^{-1} \boldsymbol{r}_k(\boldsymbol{\pi}) \right) \\
= \sum_{k \in [n]} \boldsymbol{\rho}^{\top} \boldsymbol{w}_k - \boldsymbol{\rho}^{\top} (\mathbf{I} - \gamma \mathbb{P}(\boldsymbol{\pi}))^{-1} \sum_{k \in [n]} \boldsymbol{r}_k(\boldsymbol{\pi}) \\
= \sum_{k \in [n]} \boldsymbol{\rho}^{\top} \boldsymbol{w}_k.
\end{array}
$$

Next, we focus on the inequality constraint

$$
w_k(s) \geq r_k(s, a, \boldsymbol{\pi}_{-k}) + \gamma \mathbb{P}(s, a, \boldsymbol{\pi}_{-k}) \boldsymbol{w}_k,
$$

which holds for all $s \in \mathcal{S}$, all players $k \in [n]$, and all $a \in \mathcal{A}_k$.

Summing over $a \in \mathcal{A}_k$ while multiplying each term by the corresponding coefficient $\pi_k(a \mid s)$, the display, written as an equivalent element-wise vector inequality, reads:

$$
\boldsymbol{w}_k \geq \boldsymbol{r}_k(\boldsymbol{\pi}) + \gamma \mathbb{P}(\boldsymbol{\pi}) \boldsymbol{w}_k.
$$

Finally, after consecutively substituting $\boldsymbol{w}_k$ with the element-wise lesser term $\boldsymbol{r}_k(\boldsymbol{\pi}) + \gamma \mathbb{P}(\boldsymbol{\pi}) \boldsymbol{w}_k$, we end up with the inequality:

$$
\boldsymbol{w}_k \geq (\mathbf{I} - \gamma \mathbb{P}(\boldsymbol{\pi}))^{-1} \boldsymbol{r}_k(\boldsymbol{\pi}). \tag{9}
$$

We note that $\mathbf{I} + \gamma \mathbb{P}(\boldsymbol{\pi}) + \gamma^2 \mathbb{P}^2(\boldsymbol{\pi}) + \dots = (\mathbf{I} - \gamma \mathbb{P}(\boldsymbol{\pi}))^{-1}$, since $0 < \gamma < 1$ and $\mathbb{P}(\boldsymbol{\pi})$ is a stochastic matrix.

Summing over $k$, it holds element-wise that

$$
\sum_{k \in [n]} \boldsymbol{w}_k \geq \sum_{k \in [n]} (\mathbf{I} - \gamma \mathbb{P}(\boldsymbol{\pi}))^{-1} \boldsymbol{r}_k(\boldsymbol{\pi}) = (\mathbf{I} - \gamma \mathbb{P}(\boldsymbol{\pi}))^{-1} \sum_{k \in [n]} \boldsymbol{r}_k(\boldsymbol{\pi}) = 0,
$$

where the last equality holds due to the zero-sum property (1). $\square$

Theorem C.1 (NE and global optima of $(\mathrm{P}_{\mathrm{NE}}^{\prime})$—infinite-horizon). If $(\pi^{\star}, \boldsymbol{w}^{\star})$ yields an $\epsilon$-approximate global minimum of $(\mathrm{P}_{\mathrm{NE}}^{\prime})$, then $\pi^{\star}$ is an $n\epsilon$-approximate NE of the infinite-horizon zero-sum polymatrix switching-controller Markov game $\Gamma$. Conversely, if $\pi^{\star}$ is an $\epsilon$-approximate NE of the MG $\Gamma$ with corresponding value-function vector $\boldsymbol{w}^{\star}$ such that $w_k^{\star}(s) = V_k^{\dagger, \pi_{-k}^{\star}}(s)$ for all $(k, s) \in [n] \times \mathcal{S}$, then $(\pi^{\star}, \boldsymbol{w}^{\star})$ attains an $n\epsilon$-approximate global minimum of $(\mathrm{P}_{\mathrm{NE}}^{\prime})$.

Proof.

An approximate NE is an approximate global minimum. We show that an $\epsilon$-approximate NE, $\pi^{\star}$, achieves an $n\epsilon$-approximate global minimum of the program.
Utilizing Lemma C.1, setting $w_k^{\star}(s) = V_k^{\dagger, \pi_{-k}^{\star}}(s)$ for all $s \in \mathcal{S}$, and using feasibility and the definition of an $\epsilon$-approximate NE, we see that

$$
\begin{array}{l}
\sum_{k \in [n]} \left( \boldsymbol{\rho}^{\top} \boldsymbol{w}_k^{\star} - \boldsymbol{\rho}^{\top} (\mathbf{I} - \gamma \mathbb{P}(\boldsymbol{\pi}^{\star}))^{-1} \boldsymbol{r}_k(\boldsymbol{\pi}^{\star}) \right) = \sum_{k \in [n]} \left( \boldsymbol{\rho}^{\top} \boldsymbol{w}_k^{\star} - V_k^{\boldsymbol{\pi}^{\star}}(\boldsymbol{\rho}) \right) \\
\leq \sum_{k \in [n]} \epsilon = n\epsilon.
\end{array}
$$

Indeed, this means that $(\pi^{\star}, \boldsymbol{w}^{\star})$ is an $n\epsilon$-approximate global minimizer of $(\mathrm{P}_{\mathrm{NE}}^{\prime})$.

An approximate global minimum is an approximate NE. For this direction, let $(\pi^{\star}, \boldsymbol{w}^{\star})$ be a feasible $\epsilon$-approximate global minimizer of the program $(\mathrm{P}_{\mathrm{NE}}^{\prime})$. Because the global minimum of the program is equal to 0, the objective value at an $\epsilon$-approximate global minimum is at most $\epsilon$. We observe that, for every $k \in [n]$,

$$
\boldsymbol{\rho}^{\top} \boldsymbol{w}_k^{\star} \geq \boldsymbol{\rho}^{\top} (\mathbf{I} - \gamma \mathbb{P}(\boldsymbol{\pi}^{\star}))^{-1} \boldsymbol{r}_k(\boldsymbol{\pi}^{\star}), \tag{10}
$$

which follows from induction on the inequality constraint (9).
Consequently, the assumption that

$$
\epsilon \geq \sum_{k \in [n]} \left( \boldsymbol{\rho}^{\top} \boldsymbol{w}_k^{\star} - \boldsymbol{\rho}^{\top} (\mathbf{I} - \gamma \mathbb{P}(\boldsymbol{\pi}^{\star}))^{-1} \boldsymbol{r}_k(\boldsymbol{\pi}^{\star}) \right),
$$

together with Equation (10), yields

$$
\begin{array}{l}
\epsilon \geq \boldsymbol{\rho}^{\top} \boldsymbol{w}_k^{\star} - \boldsymbol{\rho}^{\top} (\mathbf{I} - \gamma \mathbb{P}(\boldsymbol{\pi}^{\star}))^{-1} \boldsymbol{r}_k(\boldsymbol{\pi}^{\star}) \\
\geq V_k^{\dagger, \boldsymbol{\pi}_{-k}^{\star}}(\boldsymbol{\rho}) - V_k^{\boldsymbol{\pi}^{\star}}(\boldsymbol{\rho}),
\end{array}
$$

where the second inequality holds because $\boldsymbol{w}^{\star}$ is also feasible for $(\mathrm{P}_{\mathrm{BR}}^{\prime})$. The latter concludes the proof, as the display coincides with the definition of an $\epsilon$-approximate NE. $\square$

Theorem C.2 (CCE collapse to NE in polymatrix MGs—infinite-horizon). Consider a zero-sum polymatrix switching-controller Markov game, i.e., a Markov game for which Assumptions 1 and 2 hold. Further, let $\sigma$ be an $\epsilon$-approximate CCE of that game. Then, the marginal product policy $\pi^{\sigma}$, with $\pi_k^{\sigma}(a \mid s) = \sum_{\boldsymbol{a}_{-k} \in \mathcal{A}_{-k}} \sigma(a, \boldsymbol{a}_{-k} \mid s)$ for all $k \in [n]$, is an $n\epsilon$-approximate NE.

Proof. Let $\sigma$ be an $\epsilon$-approximate CCE policy of the game $\Gamma$. Moreover, let $\boldsymbol{w}_k^{\dagger}$ be the best-response value vector of each agent $k$ against the joint policy $\sigma_{-k}$.
Now, we observe that, due to Assumption 1,

$$
\begin{array}{l}
w_k^{\dagger}(s) \geq r_k(s, a, \boldsymbol{\sigma}_{-k}) + \gamma \mathbb{P}(s, a, \boldsymbol{\sigma}_{-k}) \boldsymbol{w}_k^{\dagger} \\
= \sum_{j \in \operatorname{adj}(k)} r_{(k,j)}(s, a, \boldsymbol{\pi}_j^{\sigma}) + \gamma \mathbb{P}(s, a, \boldsymbol{\sigma}_{-k}) \boldsymbol{w}_k^{\dagger}.
\end{array}
$$

Further, due to Assumption 2,

$$
\mathbb{P}(s, a, \boldsymbol{\sigma}_{-k}) \boldsymbol{w}_k^{\dagger} = \mathbb{P}(s, a, \boldsymbol{\pi}_{\operatorname{argctrl}(s)}^{\boldsymbol{\sigma}}) \boldsymbol{w}_k^{\dagger},
$$

or,

$$
\mathbb{P}(s, a, \boldsymbol{\sigma}_{-k}) \boldsymbol{w}_k^{\dagger} = \mathbb{P}(s, a, \boldsymbol{\pi}^{\boldsymbol{\sigma}}) \boldsymbol{w}_k^{\dagger}.
$$

Putting these pieces together, we reach the conclusion that $(\pi^{\sigma}, \boldsymbol{w}^{\dagger})$ is feasible for the nonlinear program $(\mathrm{P}_{\mathrm{NE}}^{\prime})$.

What is left is to prove that it is also an $\epsilon$-approximate global minimum. Indeed, if $\sum_{k} \boldsymbol{\rho}^{\top} \boldsymbol{w}_k^{\dagger} \leq \epsilon$ (by the assumption of an $\epsilon$-approximate CCE), then the objective function of $(\mathrm{P}_{\mathrm{NE}}^{\prime})$ attains an $\epsilon$-approximate global minimum. In turn, due to Theorem C.1, the latter implies that $\pi^{\sigma}$ is an $n\epsilon$-approximate NE. $\square$

# C.1 No equilibrium collapse with more than one controller per state

Example 2. We consider the following infinite-horizon 3-player Markov game. There exist three states, $s_1, s_2$, and $s_3$. Player 3 has a single action in every state, while players 1 and 2 have two available actions, $\{a_1, a_2\}$ and $\{b_1, b_2\}$ respectively, in every state.
The initial state distribution $\boldsymbol{\rho}$ is the uniform probability distribution over $\mathcal{S}$.

Reward functions. If player 1 (respectively, player 2) takes action $a_1$ (resp., $b_1$) in either of the states $s_1$ or $s_2$, they get a reward equal to $\frac{1}{20}$. In state $s_3$, both players get a reward equal to $-\frac{1}{2}$ regardless of the action they select. Player 3 always gets a reward that is equal to the negative sum of the rewards of the other two players. This way, the zero-sum polymatrix property of the game is ensured (Assumption 1).

Transition probabilities. If players 1 and 2 select the joint action $(a_1, b_1)$ in state $s_1$, the game will transition to state $s_2$. In any other case, it will transition to state $s_3$. The converse happens in state $s_2$: if they take the joint action $(a_1, b_1)$, the game will transition to $s_3$; for any other joint action, it will transition to $s_1$. From state $s_3$, the game transitions to state $s_1$ or $s_2$ uniformly at random.

At this point, it is important to notice that two players control the transition probability from one state to another. In other words, Assumption 2 does not hold.

![](images/c49f6f0360c2da66127fc3886531949574b4c1eeb85ab2010bb7376faa9ce5dc.jpg)
Figure 2: A graph of the state space with transition probabilities parametrized with respect to the policy of each player.

Next, we consider the joint policy $\boldsymbol{\sigma}$ with

$$
\boldsymbol{\sigma}(s_1) = \boldsymbol{\sigma}(s_2) = \begin{pmatrix} 0 & 1/2 \\ 1/2 & 0 \end{pmatrix},
$$

where rows are indexed by player 1's actions $(a_1, a_2)$ and columns by player 2's actions $(b_1, b_2)$.

Claim C.1. The joint policy $\sigma$ that assigns probability $\frac{1}{2}$ to the joint actions $(a_1, b_2)$ and $(a_2, b_1)$ in both states $s_1, s_2$ is a CCE, and $V_1^{\sigma}(\boldsymbol{\rho}) = V_2^{\sigma}(\boldsymbol{\rho}) = -\frac{1}{10}$.

Proof.
+
+$$
+\begin{array}{l} V_{1}^{\sigma}(\boldsymbol{\rho}) = \boldsymbol{\rho}^{\top} \left(\mathbf{I} - \gamma \mathbb{P}(\boldsymbol{\sigma})\right)^{-1} \boldsymbol{r}_{1}(\boldsymbol{\sigma}) \\ = \left( \begin{array}{ccc} \frac{1}{3} & \frac{1}{3} & \frac{1}{3} \end{array} \right) \left( \begin{array}{ccc} \frac{9}{5} & \frac{6}{5} & 0 \\ \frac{6}{5} & \frac{9}{5} & 0 \\ 1 & 1 & 1 \end{array} \right) \left( \begin{array}{c} \frac{1}{40} \\ \frac{1}{40} \\ -\frac{1}{2} \end{array} \right) \\ = -\frac{1}{10}. \end{array}
+$$
+
+We check every deviation:
+
+- $\pi_{1}(s_{1}) = \pi_{1}(s_{2}) = (1 \quad 0)$: $V^{\pi_{1} \times \sigma_{-1}}(\boldsymbol{\rho}) = -\frac{2}{5}$;
+- $\pi_{1}(s_{1}) = \pi_{1}(s_{2}) = (0 \quad 1)$: $V^{\pi_{1} \times \sigma_{-1}}(\boldsymbol{\rho}) = -\frac{1}{6}$;
+- $\pi_{1}(s_{1}) = (1 \quad 0)$, $\pi_{1}(s_{2}) = (0 \quad 1)$: $V^{\pi_{1} \times \sigma_{-1}}(\boldsymbol{\rho}) = -\frac{5}{16}$;
+- $\pi_{1}(s_{1}) = (0 \quad 1)$, $\pi_{1}(s_{2}) = (1 \quad 0)$: $V^{\pi_{1} \times \sigma_{-1}}(\boldsymbol{\rho}) = -\frac{5}{16}$.
+
+For every such deviation, the value of player 1 is smaller than $-\frac{1}{10}$. For player 2, the same follows by symmetry. Hence, $\sigma$ is indeed a CCE.
+
+![](images/1d93451b359da014b02d932288f3d4d2e62c928232c0e18e9afab70fe383ecfd.jpg)
+
+Yet, the marginalized product policy of $\sigma$, which we denote by $\pi_1^\sigma \times \pi_2^\sigma$, does not constitute a NE. The components of this policy are
+
+$$
+\left\{ \begin{array}{l} \pi_{1}^{\sigma}(s_{1}) = \pi_{1}^{\sigma}(s_{2}) = \left( \begin{array}{cc} a_{1} & a_{2} \\ 1/2 & 1/2 \end{array} \right), \\ \pi_{2}^{\sigma}(s_{1}) = \pi_{2}^{\sigma}(s_{2}) = \left( \begin{array}{cc} b_{1} & b_{2} \\ 1/2 & 1/2 \end{array} \right). \end{array} \right.
+$$
+
+That is, the product policy $\pi_1^\sigma \times \pi_2^\sigma$ selects either of the two actions of each player in states $s_1, s_2$ independently and uniformly at random. With the following claim, it can be concluded that, in general, when more than one player controls the transitions, the set of equilibria does not collapse.
+
+Claim C.2. The product policy $\pi_1^\sigma \times \pi_2^\sigma$ is not a NE.
+
+Proof. For $\pi^{\sigma} = \pi_1^{\sigma}\times \pi_2^{\sigma}$ we get,
+
+$$
+\begin{array}{l} V_{1}^{\boldsymbol{\pi}^{\sigma}}(\boldsymbol{\rho}) = \boldsymbol{\rho}^{\top} (\mathbf{I} - \gamma \mathbb{P}(\boldsymbol{\pi}^{\sigma}))^{-1} \boldsymbol{r}_{1}(\boldsymbol{\pi}^{\sigma}) \\ = \left( \begin{array}{ccc} \frac{1}{3} & \frac{1}{3} & \frac{1}{3} \end{array} \right) \left( \begin{array}{ccc} \frac{34}{21} & \frac{20}{21} & \frac{3}{7} \\ \frac{20}{21} & \frac{34}{21} & \frac{3}{7} \\ \frac{6}{7} & \frac{6}{7} & \frac{9}{7} \end{array} \right) \left( \begin{array}{c} \frac{1}{40} \\ \frac{1}{40} \\ -\frac{1}{2} \end{array} \right) \\ = -\frac{3}{10}. \end{array}
+$$
+
+But, for the deviation $\pi_1(a_1|s_1) = \pi_1(a_1|s_2) = 0$, the value function of player 1 is equal to $-\frac{1}{6}$. Hence, $\pi^{\sigma}$ is not a NE.
+
+In conclusion, Assumption 1 does not suffice to ensure equilibrium collapse.
+
+Theorem C.3 (No collapse, infinite horizon). There exists a zero-sum polymatrix Markov game (i.e., Assumption 1 holds but Assumption 2 is not satisfied) that has a CCE which does not collapse to a NE.
+
+Proof. The proof follows from the game of Example 2, and Claims C.1 and C.2. $\square$
+
+![](images/e3f69ffeb9b05b09c66107e9810b4f07982a6edda5dadcd6fa0b108f7ceb4478.jpg)
+
+# D An algorithm for approximating Markovian CCE
+
+In this section we describe the algorithm in (Daskalakis et al., 2022) used for the computation of a Markovian CCE.
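Before turning to the algorithm's parameters, we remark that the value computations of Example 2 above are easy to verify numerically. The sketch below (ours, for illustration only) recomputes $V_1^{\sigma}(\pmb{\rho})$ and $V_1^{\pi^{\sigma}}(\pmb{\rho})$ in exact rational arithmetic from the resolvent matrices displayed in the proofs of Claims C.1 and C.2:

```python
from fractions import Fraction as F

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def matvec(M, v):
    return [dot(row, v) for row in M]

rho = [F(1, 3)] * 3                    # uniform initial distribution over {s1, s2, s3}
r1 = [F(1, 40), F(1, 40), F(-1, 2)]    # player 1's expected rewards in s1, s2, s3

# Resolvent (I - gamma * P(sigma))^{-1} displayed in the proof of Claim C.1
M_cce = [[F(9, 5), F(6, 5), F(0)],
         [F(6, 5), F(9, 5), F(0)],
         [F(1),    F(1),    F(1)]]

# Resolvent under the marginalized product policy (proof of Claim C.2)
M_prod = [[F(34, 21), F(20, 21), F(3, 7)],
          [F(20, 21), F(34, 21), F(3, 7)],
          [F(6, 7),   F(6, 7),   F(9, 7)]]

V_cce = dot(rho, matvec(M_cce, r1))
V_prod = dot(rho, matvec(M_prod, r1))
print(V_cce, V_prod)  # -1/10 -3/10
```

Both values match the claims, confirming that the marginalized product policy loses $\frac{1}{5}$ in value relative to the CCE.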
We note that $M, N_{\mathrm{visit}}, p$ are parameters that affect the accuracy of the approximation, and we refer the reader to (Daskalakis et al., 2022) for further details. This algorithm computes an $\epsilon$-approximate CCE in time $\tilde{O}(\epsilon^{-3})$. Newer works have improved the dependence on $\epsilon$ to $\tilde{O}(\epsilon^{-2})$; see (Wang et al., 2023; Cui et al., 2023).
+
+As soon as an $\epsilon$-approximate CCE is computed, what is left to do is to compute the marginal policy of every player, $\pi_{k,h}^{\sigma}(a|s) = \sum_{\mathbf{a}_{-k}\in \mathcal{A}_{-k}}\sigma (a,\mathbf{a}_{-k}|s)$.
+
+Algorithm 1 SPoCMAR (Daskalakis et al., 2022)
+1: procedure SPOCMAR(n, S, A, H, M, Nvisit, p)
+2: Set 𝒱 = ∅.
+3: For each h ∈ [H], s ∈ S, set σ^cover_{h,s} = ⊥.
+4: for q ≥ 1 and while τ = 0 do
+5: Set τ = 1. # Terminate flag.
+6: Set Π_h^q := {σ^cover_{h,s} : s ∈ S} for each h ∈ [H].
+7: for h = H, H-1, ..., 1 do
+8: Set k = 0, and V^q_{i,H+1}(s) = 0 for all s ∈ S and i ∈ [m].
+9: Each player i initializes an adversarial bandit for all (s,h) ∈ S × [H].
+10: for each σ ∈ Π_h^q ∪ {σ_U} do # σ_U chooses actions uniformly at random
+11: for a total of M times do
+12: k ← k + 1.
+13: Let σ' be the policy which follows σ for the first h - 1 steps and plays according to the bandit algorithm for the state visited at step h (and acts arbitrarily for steps h' > h).
+14: Draw a joint trajectory (s_{1,k}, a_{1,k}, r_{1,k}, ..., s_{H,k}, a_{H,k}, r_{H,k}) from σ'.
+15: if (h, s_{h,k}) ∈ 𝒱 then
+16: Each player i updates its bandit algorithm at (h, s_{h,k}) with (a_{i,h,k}, (H - r_{i,h,k} - V^q_{i,h+1}(s_{h+1,k}))/H).
+17: else
+18: Each player i updates its bandit algorithm at (h, s_{h,k}) with (a_{i,h,k}, (H - (H+1-h))/H).
+19: end if
+20: end for
+21: end for
+22: For each s ∈ S and j ≥ 1, let k_{j,h,s} ∈ [M+1] denote the jth smallest value of k such that s_{h,k} = s, or M + 1 if such a jth smallest value does not exist.
+23: For each s ∈ S, let J_{h,s} denote the largest integer j such that k_{j,h,s} ≤ M.
+
+24: Define σ^q_h ∈ Δ(A)^S to be the 1-step policy σ^q_h(a|s) = (1/J_{h,s}) ∑_{j=1}^{J_{h,s}} 1[a = a_{h,k_{j,h,s}}].
+25: Set V^q_{i,h}(s) := (1/J_{h,s}) ∑_{j=1}^{J_{h,s}} ( r_{i,h,k_{j,h,s}} + V^q_{i,h+1}(s_{h+1,k_{j,h,s}}) ) if (h,s) ∈ 𝒱, and V^q_{i,h}(s) := (H+1-h) if (h,s) ∉ 𝒱.
+26: end for
+27: Define the joint policy σ^q, which follows σ^q_{h'} at each step h' ∈ [H].
+28: Call EstVisitation(σ^q, Nvisit) (Alg. 2) to obtain estimates d̂^q_{h'} ∈ Δ(S) for each h' ∈ [H].
+29: for each s ∈ S and h' ∈ [H] do
+30: if d̂^q_{h'}(s) ≥ p and (h', s) ∉ 𝒱 then
+31: Set σ^cover_{h',s} ← σ^q.
+32: Add (h', s) to 𝒱.
+33: Set τ ← 0.
+34: end if
+35: end for
+36: end for
+37: return the policy σ := σ^q.
+38: end procedure
+
+Algorithm 2 EstVisitation
+1: procedure ESTVISITATION $(\sigma, N)$
+2: for $1\leq n\leq N$ do
+3: Draw a trajectory from $\pmb{\sigma}$, and let $(s_1^n,\dots,s_H^n)$ denote the sequence of states observed.
+4: end for
+5: for $h\in [H]$ do
+6: Let $\hat{d}_h\in \Delta (\mathcal{S})$ denote the empirical distribution over $(s_h^1,\ldots,s_h^N)$.
+7: end for
+8: return $(\hat{d}_1,\dots,\hat{d}_H)$.
+9: end procedure
\ No newline at end of file
diff --git a/zerosumpolymatrixmarkovgamesequilibriumcollapseandefficientcomputationofnashequilibria/images.zip b/zerosumpolymatrixmarkovgamesequilibriumcollapseandefficientcomputationofnashequilibria/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..6b8028010fdcdf62d0bcec9f358d7efab92260c0
--- /dev/null
+++ b/zerosumpolymatrixmarkovgamesequilibriumcollapseandefficientcomputationofnashequilibria/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f748734672a9a4031aab46629565ef6e1c7a46198a9cbe9237fa30a3e57ed6bb
+size 702721
diff --git a/zerosumpolymatrixmarkovgamesequilibriumcollapseandefficientcomputationofnashequilibria/layout.json
b/zerosumpolymatrixmarkovgamesequilibriumcollapseandefficientcomputationofnashequilibria/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..050afd1396808b8e91f187e94a1e64e297ee7f92 --- /dev/null +++ b/zerosumpolymatrixmarkovgamesequilibriumcollapseandefficientcomputationofnashequilibria/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:352ee45f1ec549a3a3828b4e43844c66ea09f9e6656db0bd2b78f93c1df99a0c +size 1079405 diff --git a/zerothordermethodsfornondifferentiablenonconvexandhierarchicalfederatedoptimization/86eb1a53-cb1e-4ab9-a2db-dca6852c0e7a_content_list.json b/zerothordermethodsfornondifferentiablenonconvexandhierarchicalfederatedoptimization/86eb1a53-cb1e-4ab9-a2db-dca6852c0e7a_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..5e0aa04df4fd92a9112ac2151ec4572c61634c3d --- /dev/null +++ b/zerothordermethodsfornondifferentiablenonconvexandhierarchicalfederatedoptimization/86eb1a53-cb1e-4ab9-a2db-dca6852c0e7a_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c37167e8bd1f4a5bc229614b9b0d4e025181652f2e5b57ee68437cd3acf7052f +size 91238 diff --git a/zerothordermethodsfornondifferentiablenonconvexandhierarchicalfederatedoptimization/86eb1a53-cb1e-4ab9-a2db-dca6852c0e7a_model.json b/zerothordermethodsfornondifferentiablenonconvexandhierarchicalfederatedoptimization/86eb1a53-cb1e-4ab9-a2db-dca6852c0e7a_model.json new file mode 100644 index 0000000000000000000000000000000000000000..71ef26dd1538f1dae0f8e7ca5dd7d585dbf1c8ac --- /dev/null +++ b/zerothordermethodsfornondifferentiablenonconvexandhierarchicalfederatedoptimization/86eb1a53-cb1e-4ab9-a2db-dca6852c0e7a_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:6ea1fb4398dc3efa8162e47dee675af2a10ff57e32c53716cb2d5d3b9ddf7cf8 +size 117801 diff --git 
a/zerothordermethodsfornondifferentiablenonconvexandhierarchicalfederatedoptimization/86eb1a53-cb1e-4ab9-a2db-dca6852c0e7a_origin.pdf b/zerothordermethodsfornondifferentiablenonconvexandhierarchicalfederatedoptimization/86eb1a53-cb1e-4ab9-a2db-dca6852c0e7a_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..ea85be4f03a28e4bc3fa88a822ffe9632d2a43da --- /dev/null +++ b/zerothordermethodsfornondifferentiablenonconvexandhierarchicalfederatedoptimization/86eb1a53-cb1e-4ab9-a2db-dca6852c0e7a_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7f8b1328a800ecfcd1b315584fa4519117a2da80c0227904d89457b6be8c7b2e +size 535653 diff --git a/zerothordermethodsfornondifferentiablenonconvexandhierarchicalfederatedoptimization/full.md b/zerothordermethodsfornondifferentiablenonconvexandhierarchicalfederatedoptimization/full.md new file mode 100644 index 0000000000000000000000000000000000000000..2d959534c8db840e26c7b6337eb909b4bbfa4d29 --- /dev/null +++ b/zerothordermethodsfornondifferentiablenonconvexandhierarchicalfederatedoptimization/full.md @@ -0,0 +1,395 @@ +# Zeroth-Order Methods for Nondifferentiable, Nonconvex, and Hierarchical Federated Optimization + +Yuyang Qiu +Dept. of Industrial and Systems Engg. +Rutgers University +yuyang.qiu@rutgers.edu + +Uday V. Shanbhag +Dept. of Industrial and Manufacturing Engg. +Pennsylvania State University +udaybag@psu.edu + +Farzad Yousefian +Dept. of Industrial and Systems Engg. +Rutgers University +farzad.yousefian@rutgers.edu + +# Abstract + +Federated learning (FL) has emerged as an enabling framework for communication-efficient decentralized training. We study three broadly applicable problem classes in FL: (i) Nondifferentiable nonconvex federated optimization; (ii) Federated bilevel optimization; (iii) Federated minimax problems. Notably, in an implicit sense, both (ii) and (iii) are instances of (i). 
However, the hierarchical problems in (ii) and (iii) are often complicated by the absence of a closed-form expression for the implicit objective function. Unfortunately, research on these problems has been limited and afflicted by reliance on strong assumptions, including the need for differentiability and L-smoothness of the implicit function. We address this shortcoming by making the following contributions. In (i), by leveraging convolution-based smoothing and Clarke's subdifferential calculus, we devise a randomized smoothing-enabled zeroth-order FL method and derive communication and iteration complexity guarantees for computing an approximate Clarke stationary point. To contend with (ii) and (iii), we devise a unified randomized implicit zeroth-order FL framework, equipped with explicit communication and iteration complexities. Importantly, our method utilizes delays during local steps to skip making calls to the inexact lower-level FL oracle. This results in significant reduction in communication overhead when addressing hierarchical problems. We empirically validate the theory on nonsmooth and hierarchical ML problems. + +# 1 Introduction + +Federated learning (FL) has recently emerged as a promising enabling framework for learning predictive models from a multitude of distributed, privacy-sensitive, and possibly, heterogeneous datasets. This is accomplished through the use of efficiently devised periodic communications between a central server and a collection of clients. The FL algorithmic framework allows for addressing several key obstacles in the development and implementation of standard machine learning methods in a distributed and parallel manner. For instance, the conventional parallel stochastic gradient descent (SGD) method requires the exchange of information among the computing nodes at every single time step, resulting in excessive communication overhead. 
In contrast, FL methods including FedAvg [34] and Local SGD [46] overcome this onerous communication bottleneck, provably attaining the linear speedup of parallel SGD with significantly fewer communication rounds [18, 54, 24, 7]. These guarantees have been further complemented by recent efforts [26, 38] where the presence of both data heterogeneity (i.e., variability of local datasets) and device heterogeneity (i.e., variability of edge devices in computational power, memory, and bandwidth) has been addressed. Despite recent advances, much needs to be understood about designing communication-efficient decentralized methods for resolving three broadly applicable problem classes, each of which is presented next.
+
+(a) Nondifferentiable nonconvex locally constrained FL. Consider the prototypical FL setting:
+
+$$
+\min_{x} \left\{ f(x) \triangleq \frac{1}{m} \sum_{i=1}^{m} \mathbb{E}_{\xi_{i} \in \mathcal{D}_{i}} [\tilde{f}_{i}(x, \xi_{i})] \;\Big|\; x \in X \triangleq \bigcap_{i=1}^{m} X_{i} \right\}, \quad (\mathbf{FL}_{nn})
+$$
+
+where $f$ is a nonsmooth nonconvex function and is associated with a group of $m$ clients indexed by $i \in [m] \triangleq \{1, \ldots, m\}$, $\mathcal{D}_i$ denotes the local dataset, $\tilde{f}_i : \mathbb{R}^n \times \mathcal{D}_i \to \mathbb{R}$ is the local loss function, and $X_i \subseteq \mathbb{R}^n$ is an easy-to-project local constraint set. Notably, local datasets may vary across clients, allowing for data heterogeneity. We also consider client-specific local sets to induce personalization.
+
+(b) Nondifferentiable nonconvex bilevel $FL$.
Overlaying a bilevel term in $(\mathbf{FL}_{nn})$ leads to
+
+$$
+\min_{x \in X \triangleq \bigcap_{i=1}^{m} X_{i}} \left\{ f(x) \;\Big|\; y(x) \in \arg\min_{y \in \mathbb{R}^{\tilde{n}}} \frac{1}{m} \sum_{i=1}^{m} \mathbb{E}_{\zeta_{i} \in \tilde{\mathcal{D}}_{i}} [\tilde{h}_{i}(x, y, \zeta_{i})] \right\}, \quad (\mathbf{FL}_{bl})
+$$
+
+where $f(\bullet) \triangleq \frac{1}{m} \sum_{i=1}^{m} \mathbb{E}_{\xi_i \in \mathcal{D}_i}[\tilde{f}_i(\bullet, y(\bullet), \xi_i)]$ denotes the implicit objective function and $y(\bullet): \mathbb{R}^n \to \mathbb{R}^{\tilde{n}}$ is a single-valued map returning the unique solution to the lower-level problem at $x$.
+
+(c) Nondifferentiable nonconvex minimax $FL$. Finally, we consider the minimax setting, defined as
+
+$$
+\min_{x \in X \triangleq \bigcap_{i=1}^{m} X_{i}} \; \max_{y \in \mathbb{R}^{\tilde{n}}} \; \frac{1}{m} \sum_{i=1}^{m} \mathbb{E}_{\xi_{i} \in \mathcal{D}_{i}} [\tilde{f}_{i}(x, y, \xi_{i})], \quad (\mathbf{FL}_{mm})
+$$
+
+where we assume that $y(x) \in \arg\max_{y \in \mathbb{R}^{\tilde{n}}} \frac{1}{m} \sum_{i=1}^{m} \mathbb{E}_{\xi_i \in \mathcal{D}_i}[\tilde{f}_i(x,y,\xi_i)]$ is unique for all $x$. Let $f(\bullet) \triangleq \frac{1}{m} \sum_{i=1}^{m} \mathbb{E}_{\xi_i \in \mathcal{D}_i}[\tilde{f}_i(\bullet,y(\bullet),\xi_i)]$ denote the implicit objective function. Indeed, problem $(\mathbf{FL}_{bl})$ subsumes this minimax formulation when we choose $\tilde{h}_i := -\tilde{f}_i$ and $\tilde{\mathcal{D}}_i := \mathcal{D}_i$.
+
+Notably, in an implicit sense, both (b) and (c) are instances of problem (a). However, these hierarchical problems are often complicated by the absence of a closed-form expression for the implicit objective, denoted by $f(\bullet)$. Indeed, $f(\bullet)$ is often nonsmooth, nonconvex, and unavailable.
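A toy (non-federated) instance of our own construction makes this concrete: with lower-level loss $\tilde{h}(x,y) = (y - \sin x)^2$, the solution map is $y(x) = \sin x$, and an upper-level loss $|x| + y^2$ yields the implicit objective $f(x) = |x| + \sin^2 x$, which is nonsmooth and nonconvex. Even evaluating $f$ at a single point requires (approximately) solving the lower-level problem:

```python
import math

# Toy bilevel instance (illustrative only, not from the paper):
#   lower level: y(x) = argmin_y (y - sin x)^2  =>  y(x) = sin x
#   upper level: f(x) = |x| + y(x)^2, i.e., the implicit objective |x| + sin^2(x).

def y_eps(x, steps=200, lr=0.4):
    """Epsilon-approximate lower-level solution via plain gradient descent."""
    y = 0.0
    for _ in range(steps):
        y -= lr * 2.0 * (y - math.sin(x))  # gradient of (y - sin x)^2 in y
    return y

def f_implicit(x):
    # Evaluating the implicit objective requires solving the lower-level
    # problem (approximately) before the upper-level loss can be computed.
    return abs(x) + y_eps(x) ** 2

print(f_implicit(1.0))  # close to 1 + sin(1)^2
```

Here `y_eps` plays the role of the inexact lower-level oracle: neither $f$ nor its (generalized) gradient is available in closed form to the upper level.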
As such, the absence of both zeroth- and first-order information of $f(\bullet)$ in problems (b) and (c) makes the design and analysis of FL methods for these problems more challenging than that for (a).
+
+Gaps. To the best of our knowledge, there are no known efficient FL algorithms that can contend with both nonsmoothness and nonconvexity in an unstructured sense. Generalizations that can accommodate either a bilevel or a minimax structure also remain unaddressed in FL.
+
+Goal. To develop a unified FL framework accommodating nondifferentiable nonconvex settings with extensions allowing for bilevel or minimax interactions. We now describe the proposed framework.
+
+# 1.1 A Smoothed Sampled Zeroth-order Framework
+
+We consider a smoothed framework for contending with constrained, nonsmooth, and nonconvex regimes. Specifically, given that $f$ is an expectation-valued function and $X$ is a closed and convex set, both of which are defined in $(\mathbf{FL}_{nn})$, a smoothed unconstrained approximation is given as follows.
+
+$$
+\left\{ \begin{array}{l} \min \frac{1}{m} \sum_{i=1}^{m} \mathbb{E}_{\xi_{i}} [\tilde{f}_{i}(x, \xi_{i})] \\ \text{subject to } x \in \bigcap_{i=1}^{m} X_{i} \end{array} \right\} \equiv \left\{ \min \frac{1}{m} \sum_{i=1}^{m} \Big[ \mathbb{E}_{\xi_{i}} [\tilde{f}_{i}(x, \xi_{i})] + \underbrace{\mathbb{I}_{X_{i}}(x)}_{\text{Indicator function}} \Big] \right\}
+$$
+
+$$
+\overset{\text{Smoothing}}{\approx} \left\{ \min \frac{1}{m} \sum_{i=1}^{m} \underbrace{\mathbb{E}_{u_{i} \in \mathbb{B}} [\mathbb{E}_{\xi_{i}} [\tilde{f}_{i}(x + \eta u_{i}, \xi_{i})]]}_{\text{Convolution smoothing}} + \underbrace{\mathbb{I}_{X_{i}}^{\eta}(x)}_{\text{Moreau smoothing}} \right\}.
\tag{$\mathbf{FL}_{nn}^{\eta}$}
+$$
+
+If $f$ is as defined in $(\mathbf{FL}_{nn})$ and $d(x) \triangleq \frac{1}{m} \sum_{i=1}^{m} \mathbb{I}_{X_i}(x)$, then $\mathbf{f}$ and its smoothing $\mathbf{f}^\eta$ are defined as
+
+$$
+\mathbf{f}(x) \triangleq f(x) + d(x) \quad \text{and} \quad \mathbf{f}^{\eta}(x) \triangleq f^{\eta}(x) + d^{\eta}(x), \tag{1}
+$$
+
+where $f^{\eta}(x) \triangleq \frac{1}{m} \sum_{i=1}^{m} [\mathbb{E}_{u_i \in \mathbb{B}}[\mathbb{E}_{\xi_i}[\tilde{f}_i(x + \eta u_i, \xi_i)]]]$ and $d^{\eta}(x) \triangleq \frac{1}{m} \sum_{i=1}^{m} \mathbb{I}_{X_i}^{\eta}(x)$.
+
+(i) Clarke-stationarity. Consider the original problem $(\mathbf{FL}_{nn})$. Under the assumption that the objective of $(\mathbf{FL}_{nn})$ is Lipschitz continuous, Clarke-stationarity of $x$ w.r.t. $(\mathbf{FL}_{nn})$ requires that $x$ satisfies $0 \in \partial f(x) + \mathcal{N}_{\cap_{i=1}^{m} X_i}(x)$, where $\partial f(x)$ represents the Clarke generalized gradient [3] of $f$ at $x$. However, a negative result has been provided regarding the efficient computability of an $\epsilon$-stationary solution in nonsmooth nonconvex regimes [56]. Consequently, we focus on the smoothed counterpart $(\mathbf{FL}_{nn}^{\eta})$, a smooth nonconvex problem. In fact, under suitable conditions, it can be shown that a stationary point of $(\mathbf{FL}_{nn}^{\eta})$ is a $2\eta$-Clarke stationary point of the original problem, i.e.,
+
+$$
+[0 \in \partial \mathbf{f}^{\eta}(x)] \Longrightarrow [0 \in \partial_{2\eta} \mathbf{f}(x)], \tag{2}
+$$
+
+where $\partial_{2\eta}\mathbf{f}(x)$ represents the $2\eta$-Clarke generalized gradient of $\mathbf{f}$ at $x$.
+
+(ii) Meta-scheme for efficient resolution of $(\mathbf{FL}_{nn}^{\eta})$. We develop zeroth-order stochastic gradient schemes for resolving $(\mathbf{FL}_{nn}^{\eta})$.
This requires a zeroth-order gradient estimator for $f^{\eta}(x)$, denoted by $\frac{1}{m}\sum_{i=1}^{m}g_i^\eta(x,\xi_i,v_i)$ where $v_i \in \eta\mathbb{S}$ for $i \in [m]$ and $\mathbb{S}$ denotes the surface of the unit ball. Note that the Moreau smoothing of the indicator function of $X_i$, denoted by $\mathbb{I}_{X_i}^\eta(x)$, admits a gradient, defined as $\nabla_x\mathbb{I}_{X_i}^\eta(x) = \frac{1}{\eta}(x - \mathcal{P}_{X_i}(x))$, where $\mathcal{P}_{X_i}(x) \triangleq \operatorname*{arg\,min}_{y \in X_i} \|y - x\|^2$. The resulting meta-scheme is defined next.
+
+$$
+x_{k+1} = x_{k} - \gamma \left( \frac{1}{m} \sum_{i=1}^{m} \left( g_{i}^{\eta}(x_{k}, \xi_{i,k}, v_{i,k}) + \frac{1}{\eta} \left( x_{k} - \mathcal{P}_{X_{i}}(x_{k}) \right) \right) \right), \quad k \geq 0. \quad (\textbf{Meta-ZO})
+$$
+
+(iii) Inexact implicit generalizations for $(\mathbf{FL}_{bl})$ and $(\mathbf{FL}_{mm})$. In addressing the bilevel problem, unlike in $(\mathbf{FL}_{nn})$, the clients in $(\mathbf{FL}_{bl})$ may not have access to the exact evaluation of the implicit local objective $\tilde{f}_{i}(\bullet, y(\bullet), \xi_{i})$. This makes a direct extension of FL schemes challenging, because the evaluation of the implicit local function necessitates exact resolution of the lower-level problem. We address this challenge by developing inexact implicit variants of the zeroth-order scheme, where clients compute only an $\varepsilon$-approximation of $y(x)$, denoted by $y_{\varepsilon}(x)$, in a federated fashion. This inexact framework, described next, is crucial in addressing hierarchy in bilevel FL formulations. Let $f_{i}^{\eta}(\bullet)$ denote the smoothed implicit local function. We estimate $\nabla_{x}f_{i}^{\eta}(x)$ by approximating the expectation by sampling in (a.1), as follows, while in (a.2), we replace $y(x)$ by an inexact form $y_{\varepsilon}(x)$.
This leads to the introduction of $g_{i}^{\eta,\varepsilon}(x, \xi, v)$ as captured below.
+
+$$
+\nabla_{x}f_{i}^{\eta}(x) = \underbrace{\mathbb{E}_{\xi_{i},v}\left[g_{i}^{\eta}(x,\xi_{i},v)\right]}_{\text{cannot be tractably evaluated}}\stackrel{(a.1)}{\approx}\underbrace{g_{i}^{\eta}(x,\xi_{i,k},v_{T_{r}})}_{\text{intractable since $y(x)$ is unavailable}}\stackrel{(a.2)}{\approx}g_{i}^{\eta ,\varepsilon}(x,\xi_{i,k},v_{T_{r}}),
+$$
+
+where $k$ is the local time index, $T_{r}$ is the global time index of communication round $r$ ($k \geq T_{r}$), and $g_{i}^{\eta, \varepsilon}(x, \xi, v) \triangleq \frac{n}{\eta} (\tilde{f}_{i}(x + v, y_{\varepsilon}(x + v), \xi) - \tilde{f}_{i}(x, y_{\varepsilon}(x), \xi)) \frac{v}{\|v\|}$ denotes an inexact implicit zeroth-order gradient. Note that at each round of communication at the upper level, $y_{\varepsilon}(x)$ can be computed using calls to a standard FL method, e.g., FedAvg, in the lower level. Notably, such calls to an FL oracle should be made only at the global step to preserve the communication efficiency of the scheme. It follows that $g_{i}^{\eta, \varepsilon}(x, \xi_{i,k}, v_{T_r}) = \nabla_x f_{i}^{\eta}(x) + \tilde{e}_{i,\varepsilon}$, where the approximation error $\tilde{e}_{i,\varepsilon}$ is a possibly biased random variable. This bias can then be controlled by carefully updating the accuracy level $\varepsilon$ at each communication round, as we will address in this work.
+
+# 1.2 Contributions
+
+Our goal lies in extending (Meta-ZO) to federated nonsmooth nonconvex optimization and then providing generalizations to bilevel and minimax regimes. In each instance, we intend to provide iteration- and communication-complexity bounds for computing an $\epsilon$-accurate $\eta$-Clarke stationary point of the original problem. Accordingly, we make the following contributions.
+
+(i) $FL$ for nondifferentiable nonconvex problems.
To address $(\mathbf{FL}_{nn})$ with heterogeneous datasets, we develop a Randomized Zeroth-Order Locally-Projected Federated Averaging method $(\mathrm{FedRZO_{nn}})$. We derive an iteration complexity of $\mathcal{O}\left(\frac{1}{m\epsilon^2}\right)$ and a communication complexity of $\mathcal{O}\left(m^{3/4}K^{3/4}\right)$ for computing an approximate Clarke stationary point. Such guarantees appear to be new in the context of resolving nondifferentiable nonconvex FL problems, e.g., in training ReLU neural networks (see Table 2). This is distinct from existing zeroth-order methods, including FedZO [12], which rely on differentiability and $L$-smoothness of the local loss functions.
+
+(ii) Federated bilevel optimization. In addressing $(\mathbf{FL}_{bl})$, we develop $\mathrm{FedRZO_{bl}}$, an inexact implicit extension of $\mathrm{FedRZO_{nn}}$. By skipping local calls to the lower-level FL oracle, $\mathrm{FedRZO_{bl}}$ is a novel communication-efficient FL scheme with single-timescale local steps, resulting in a significant reduction in communication overhead. Table 1 summarizes the communication complexity of this scheme. In all cases, we assume heterogeneous data at the upper level. In the lower level, depending on which conventional FL scheme is employed, we obtain the communication complexity accordingly.
+
+Table 1: Communication complexity for nonsmooth nonconvex, bilevel, and minimax FL problems. $K$ and $\widetilde{K}$ denote the maximum number of iterations used in the upper level and the lower level, respectively.
+
+| Upper level, heterogeneous (this work) | Lower level (standard FL schemes are employed) | Lower-level rounds | Total for bilevel and minimax (this work) |
+| --- | --- | --- | --- |
+| $\mathcal{O}((mK)^{3/4})$ (Prop. 1, Thm. 1) | i.i.d. (Local SGD [26]) | $\mathcal{O}(m)$ | $\mathcal{O}(m^{7/4}K^{3/4})$ |
+|  | i.i.d. (FedAC [54]) | $\mathcal{O}(m^{1/3})$ | $\mathcal{O}(m^{13/12}K^{3/4})$ |
+|  | heterogeneous (LFD [20]) | $\mathcal{O}(m^{1/3}K^{1/3})$ | $\mathcal{O}((mK)^{11/12})$ |
+
+(iii) Federated minimax optimization. $\mathrm{FedRZO_{bl}}$ can be employed for addressing $(\mathbf{FL}_{mm})$ where $\tilde{h}_i := -\tilde{f}_i$. As such, the complexity results in (ii) hold for solving (nondifferentiable nonconvex)-(strongly concave) FL minimax problems. Such results are new for this class of FL problems.
+
+Remark 1. There has been recent progress in addressing bilevel and minimax problems in FL, including Local SGDA and FedNest [48, 43]. Our work in (ii) and (iii) has two main distinctions from existing FL methods, described as follows. (1) We do not require the differentiability and $L$-smoothness of the implicit objective function. This assumption may fail to hold, e.g., in constrained hierarchical FL problems. (2) The existing FL methods for bilevel and minimax problems assume that the lower-level problem is unconstrained. In fact, even in centralized regimes, addressing hierarchical problems where the lower-level constraints depend on $x$ has remained challenging. For example, consider the problem $\min_{x \in [-1,1]} \max_{y \in [-1,1],\, x + y \leq 0}\; x^{2} + y$, which admits the unique solution $(x^{*}, y^{*}) = (0.5, -0.5)$. Now consider a reversal of the min and max in this problem, i.e., $\max_{y \in [-1,1]} \min_{x \in [-1,1],\, x + y \leq 0}\; x^{2} + y$, admitting the unique solution $(x^{*}, y^{*}) = (-1, 1)$.
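This order-dependence is easy to check numerically; the sketch below (ours, not part of the paper) brute-forces both orderings of the constrained problem over a fine grid:

```python
import numpy as np

grid = np.linspace(-1.0, 1.0, 201)       # step 0.01; contains -1, -0.5, 0.5, 1
f = lambda x, y: x**2 + y
feasible = lambda x, y: x + y <= 1e-9    # coupling constraint x + y <= 0

def min_max():
    """min over x of (max over feasible y) of x^2 + y."""
    candidates = []
    for x in grid:
        ys = [y for y in grid if feasible(x, y)]
        y_star = max(ys)                 # objective is increasing in y
        candidates.append((f(x, y_star), x, y_star))
    return min(candidates)

def max_min():
    """max over y of (min over feasible x) of x^2 + y."""
    candidates = []
    for y in grid:
        xs = [x for x in grid if feasible(x, y)]
        x_star = min(xs, key=lambda x: x * x)  # minimize x^2 over feasible x
        candidates.append((f(x_star, y), x_star, y))
    return max(candidates)

v1, x1, y1 = min_max()
v2, x2, y2 = max_min()
print(round(x1, 2), round(y1, 2))  # 0.5 -0.5
print(round(x2, 2), round(y2, 2))  # -1.0 1.0
```

The two orderings land on entirely different points (with values $-\frac{1}{4}$ and $2$, respectively), illustrating that no saddle point exists under the coupling constraint.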
As a consequence, the well-known primal-dual gradient methods, which have been extensively employed for addressing minimax problems with independent constraint sets, may fail to converge to a saddle point in minimax problems with coupling constraints. Our proposed algorithmic framework allows for accommodating these challenging problems in FL.
+
+# 2 Related work
+
+Table 2: Comparison of our scheme with other FL schemes for nonconvex settings
+
+| Ref. | Nonconvex | Metric | Rate | Comm. rounds | Assumption |
+| --- | --- | --- | --- | --- | --- |
+| [53] | Smooth | $\Vert\nabla_x f(x)\Vert^2$ | $\mathcal{O}(G^2/\sqrt{mK})$ | $\mathcal{O}(m^{3/4}K^{3/4})$ | Bounded gradients, $L$-smooth functions |
+| [50] | Smooth | $\Vert\nabla_x f(x)\Vert^2$ | $\mathcal{O}(1/\sqrt{mK})$ | $\mathcal{O}(m^{3/2}K^{1/2})$ | $L$-smooth functions |
+| [18] | Smooth, PL-cond. | $f(x) - f^*$ | $\mathcal{O}(1/(mK))$ | $\mathcal{O}(m^{1/3}K^{1/3})$ | $L$-smooth functions, PL-condition |
+| This work | Nonsmooth | $\Vert\nabla_x \mathbf{f}^{\eta}(x)\Vert^2$ | $\mathcal{O}(1/\sqrt{mK})$ | $\mathcal{O}(m^{3/4}K^{3/4})$ | Lipschitz functions |
+
+(i) Nondifferentiable nonconvex optimization. Nonsmooth and nonconvex optimization has been studied extensively with convergence guarantees to Clarke-stationary points via gradient sampling [1, 2] and difference-of-convex approaches [5]. Most complexity and rate guarantees necessitate smoothness of the nonconvex term [14, 51, 8, 9, 28] or convexity of the nonsmooth term [13], while only a few results truly consider nonsmooth nonconvex objective functions [32, 4, 42]. (ii) Nondifferentiable nonconvex federated learning. The research on FL was initially motivated by decentralized neural networks where local functions are nondifferentiable and nonconvex [34]. Nevertheless, the theoretical guarantees that emerged after FedAvg required either nonsmooth convex or smooth nonconvex local costs, under either iid [46, 58, 50, 47] or non-iid datasets [30, 26], while provable guarantees for FL methods under nonconvexity [58, 53, 19] require $L$-smoothness of local functions. Unfortunately, these assumptions hold for neither ReLU neural networks nor risk-averse learning, necessitating the use of Clarke calculus [3]. Moreover, existing work on zeroth-order FL methods in convex [31] and nonconvex settings [12] relies on the smoothness properties of the objective function. However, there appear to be no provably convergent FL schemes with complexity guarantees for computing approximate Clarke stationary points of nondifferentiable nonconvex problems. (iii) Federated bilevel optimization. Hyperparameter tuning [21] and its federated counterpart [23] are a crucial, yet computationally complex, ingredient of the machine learning (ML) pipeline. Bilevel models arise where the lower level is a parameterized training model while the upper level requires selecting the best configuration for the unknown hyperparameters [15, 22, 49]. Solving such hierarchical problems is challenging because of nondifferentiable nonconvex terms and the absence of an analytical form for the implicit objective.
These challenges complicate the development of provable guarantees for privacy-aware and communication-efficient schemes. (iv) Federated minimax optimization. Minimax optimization has assumed relevance in adversarial learning [17, 40, 44] and fairness in ML [57], amongst other efforts. Recently, FL was extended to distributed minimax problems [36, 10, 43], but relatively little exists in nonconvex-strongly-concave settings [48, 43].
+
+# 3 A Zeroth-order FL Framework for Nondifferentiable Nonconvex Settings
+
+In this section, we introduce our framework for $(\mathbf{FL}_{nn})$, where we impose the following assumption.
+
+Assumption 1. Consider problem $(\mathbf{FL}_{nn})$. The following hold.
+
+(i) The function $f_{i}$ is Lipschitz continuous with parameter $L_0 > 0$ for all $i \in [m]$.
+(ii) For any $i \in [m]$, client $i$ has access to a zeroth-order oracle $\tilde{f}_i(x, \xi_i)$ satisfying the following for every $x$ in an almost-sure sense:
+(ii-a) $\mathbb{E}[\tilde{f}_i(x,\xi_i)\mid x] = f_i(x)$; (ii-b) there exists $\nu >0$ such that $\mathbb{E}[|\tilde{f}_i(x,\xi_i) - f_i(x)|^2\mid x]\leq \nu^2$.
+(iii) The set $X_{i}$ is nonempty, closed, and convex for all $i \in [m]$. In addition, the following bounded set-dissimilarity condition holds for all $x \in \mathbb{R}^n$ and some scalars $B_{1}$ and $B_{2}$:
+
+$$
+\frac{1}{m} \sum_{i=1}^{m} \operatorname{dist}^{2}(x, X_{i}) \leq B_{1}^{2} + B_{2}^{2} \left\| x - \frac{1}{m} \sum_{i=1}^{m} \mathcal{P}_{X_{i}}(x) \right\|^{2}. \tag{3}
+$$
+
+We note that the bounded set-dissimilarity condition is naturally analogous to the so-called bounded gradient-dissimilarity condition that has been employed in the literature, e.g., in [25]. In particular, when the bounded gradient-dissimilarity condition is stated for the Moreau smoothing of the indicator function of $X_{i}$, denoted by $\mathbb{I}_{X_i}^\eta (x)$, we arrive at (3).
Notably, condition (3) holds for the iterates generated by the algorithm when, for example, the iterates remain bounded.

Nonsmooth unconstrained reformulation. Consider an unconstrained reformulation of $(\mathbf{FL}_{nn})$ given by $\min_{x\in \mathbb{R}^n}\mathbf{f}(x)$ (see (1)), where the nonsmoothness of $\mathbf{f}$ arises from that of $f$ and the local indicator functions $\mathbb{I}_{X_i}$. The minimization of $\mathbf{f}$ is challenging: recent findings in nonsmooth analysis [56] show that, for a suitable class of nonsmooth functions, computing an $\epsilon$-stationary point, i.e., a point $\bar{x}$ for which $\mathrm{dist}(0_n,\partial \mathbf{f}(\bar{x}))\leq \epsilon$, is impossible in finite time.

Approximate Clarke stationarity. To circumvent this challenge, as a weakening of $\epsilon$-stationarity, a notion of $(\delta, \epsilon)$-stationarity was introduced in [56]: a vector $\bar{x}$ is $(\delta,\epsilon)$-stationary when $\mathrm{dist}(0_n, \partial_\delta \mathbf{f}(\bar{x})) \leq \epsilon$, where the set

$$
\partial_{\delta}\mathbf{f}(x) \triangleq \operatorname{conv}\left\{\zeta : \zeta \in \partial \mathbf{f}(y),\ \| x - y \| \leq \delta \right\}
$$

denotes the $\delta$-Clarke generalized gradient of $\mathbf{f}$ at $x$ [16]; i.e., if $x$ is $(\delta, \epsilon)$-stationary, then there exists a convex combination of gradients in a $\delta$-neighborhood of $x$ with norm at most $\epsilon$ [41].

This discussion naturally leads to the following key question: Can we devise provably convergent FL methods for computing approximate Clarke stationary points of the minimization of $\mathbf{f}$? The aim of this section is to provide an affirmative answer to this question by proposing a zeroth-order FL method that employs smoothing.
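To make the notion of $(\delta,\epsilon)$-stationarity concrete, consider the one-dimensional example $f(x) = |x|$ (our own illustration, not drawn from the analysis above):

$$
\partial f(x) =
\begin{cases}
\{\operatorname{sign}(x)\}, & x \neq 0,\\
[-1,\,1], & x = 0,
\end{cases}
\qquad\Longrightarrow\qquad
\partial_\delta f(x) = [-1,\,1] \ni 0 \quad \text{whenever } |x| < \delta.
$$

Thus every point within $\delta$ of the kink is $(\delta,\epsilon)$-stationary for every $\epsilon > 0$, whereas no point $x \neq 0$ is $\epsilon$-stationary for any $\epsilon < 1$; this is precisely the weakening that makes finite-time guarantees attainable.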
To contend with the nonsmoothness, we employ the Moreau-smoothed variant of $\mathbb{I}_X(x)$, where $X \triangleq \bigcap_{i=1}^{m} X_i$, and a randomized smoothed variant of $f$, as shown next.

Randomized smoothing of the loss function. To smooth the loss function $f$, we employ a randomized smoothing approach in which the smoothing parameter is kept sufficiently small. This framework is rooted in the seminal work by Steklov [45], leading to progress in both convex [27, 52, 11] and nonconvex [37] regimes. We consider a smoothing of $f$, given by $f^{\eta}$ defined as $f^{\eta}(x) \triangleq \mathbb{E}_{u \in \mathbb{B}}[f(x + \eta u)]$, where $u$ is a random vector in the unit ball $\mathbb{B} \triangleq \{u \in \mathbb{R}^n \mid \| u \| \leq 1\}$. Further, we let $\mathbb{S}$ denote the surface of the ball $\mathbb{B}$, i.e., $\mathbb{S} \triangleq \{v \in \mathbb{R}^n \mid \| v \| = 1\}$, and let $\eta \mathbb{B}$ and $\eta \mathbb{S}$ denote the ball of radius $\eta$ and its surface, respectively.

Lemma 1 (Randomized spherical smoothing). Let $h: \mathbb{R}^n \to \mathbb{R}$ be a given continuous function and define $h^\eta(x) \triangleq \mathbb{E}_{u \in \mathbb{B}}[h(x + \eta u)]$. Then, the following hold.

(i) $h^\eta$ is continuously differentiable and $\nabla h^\eta (x) = \left(\frac{n}{\eta}\right)\mathbb{E}_{v\in \eta \mathbb{S}}\left[h(x + v)\frac{v}{\|v\|}\right]$ for any $x\in \mathbb{R}^n$.

Suppose $h$ is Lipschitz continuous with parameter $L_0 > 0$. Then, the following statements hold.

(ii) $|h^{\eta}(x) - h^{\eta}(y)| \leq L_0 \| x - y \|$ for all $x, y \in \mathbb{R}^n$; (iii) $|h^{\eta}(x) - h(x)| \leq L_0 \eta$ for all $x \in \mathbb{R}^n$; (iv) $\| \nabla h^{\eta}(x) - \nabla h^{\eta}(y) \| \leq \frac{L_0 n}{\eta} \| x - y \|$ for all $x, y \in \mathbb{R}^n$.

The discussion leads to the consideration of the following smoothed federated problem.
Definition 1 (Unconstrained smoothed approximate problem). Given $\eta >0$, consider an unconstrained smoothed problem given as

$$
\min_{x \in \mathbb{R}^n} \mathbf{f}^{\eta}(x) \left\{\triangleq \frac{1}{m}\sum_{i=1}^{m}\mathbf{f}_i^{\eta}(x)\right\}, \ \text{where } \mathbf{f}_i^{\eta}(x) \triangleq \mathbb{E}_{\xi_i, u_i \in \mathbb{B}}[\tilde{f}_i(x + \eta u_i, \xi_i)] + \frac{1}{2\eta}\operatorname{dist}^2(x, X_i). \tag{4}
$$

# Algorithm 1 Randomized Zeroth-Order Locally-Projected Federated Averaging ($\mathrm{FedRZO_{nn}}$)

1: input: Server chooses a random initial point $\hat{x}_0 \in X$, stepsize $\gamma$, smoothing parameter $\eta$, synchronization indices $T_0 \coloneqq 0$ and $T_r \geq 1$, where $r \geq 1$ is the communication round index
2: for $r = 0,1,\ldots$ do
3: Server broadcasts $\hat{x}_r$ to all clients: $x_{i,T_r} \coloneqq \hat{x}_r$, $\forall i \in [m]$
4: for $k = T_r, \dots, T_{r+1} - 1$ do in parallel by clients
5: Client $i$ generates the random replicates $\xi_{i,k} \in \mathcal{D}_i$ and $v_{i,k} \in \eta \mathbb{S}$
6: $g_{i,k}^{\eta} := \frac{n}{\eta^{2}}\left(\tilde{f}_{i}(x_{i,k} + v_{i,k},\xi_{i,k}) - \tilde{f}_{i}(x_{i,k},\xi_{i,k})\right)v_{i,k}$
7: Client $i$ does a local update as $x_{i,k + 1} \coloneqq x_{i,k} - \gamma \left(g_{i,k}^{\eta} + \frac{1}{\eta}\left(x_{i,k} - \mathcal{P}_{X_i}(x_{i,k})\right)\right)$
8: end for
9: Server receives $x_{i,T_{r+1}}$ from all clients and aggregates, i.e., $\hat{x}_{r+1} := \frac{1}{m}\sum_{i=1}^{m}x_{i,T_{r+1}}$
10: end for

To address (4), we propose $\mathrm{FedRZO_{nn}}$, given by Algorithm 1. Here, client $i$ employs a zeroth-order stochastic gradient of the form $g_{i,k}^{\eta} \triangleq \frac{n}{\eta^{2}} \left( \tilde{f}_{i}(x_{i,k} + v_{i,k}, \xi_{i,k}) - \tilde{f}_{i}(x_{i,k}, \xi_{i,k}) \right) v_{i,k}$, augmented by the gradient of the Moreau smoothed function.
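To illustrate step 6 of Algorithm 1, the following minimal sketch (our own toy example; the test function and all constants are illustrative, not from the paper) implements the two-point spherical estimator $g_{i,k}^\eta$ and checks its unbiasedness on a smooth quadratic, for which $\nabla h^\eta(x) = x$ exactly:

```python
import numpy as np

rng = np.random.default_rng(0)

def zo_grad(f, x, eta, rng):
    """Two-point zeroth-order estimator g = (n/eta^2) (f(x+v) - f(x)) v, v ~ Unif(eta*S)."""
    n = x.size
    s = rng.normal(size=n)
    s /= np.linalg.norm(s)          # uniform direction on the unit sphere S
    v = eta * s
    return (n / eta**2) * (f(x + v) - f(x)) * v

# Sanity check: for h(z) = ||z||^2 / 2, the smoothed gradient is grad h^eta(x) = x.
h = lambda z: 0.5 * (z @ z)
x = np.array([1.0, -2.0, 0.5])
g_bar = np.mean([zo_grad(h, x, eta=0.1, rng=rng) for _ in range(50_000)], axis=0)
# g_bar approximates x, consistent with the unbiasedness implied by Lemma 1 (i)
```

In Algorithm 1 this estimator is combined with the Moreau term $\frac{1}{\eta}\left(x_{i,k} - \mathcal{P}_{X_i}(x_{i,k})\right)$ before taking the local step.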
The random sample $v_{i,k} \in \eta \mathbb{S}$ is locally generated by each client $i$, allowing for randomized smoothing. This is motivated by Lemma 1 (i), which facilitates the development of a randomized zeroth-order gradient.

We define $\bar{x}_k \triangleq \frac{\sum_{i=1}^m x_{i,k}}{m}$ as an auxiliary sequence denoting the averaged iterates of the clients.

Definition 2. Consider Algorithm 1. Let $H > 0$ denote an upper bound on the number of local steps per round, i.e., $H \geq \max_{r = 0,1,\ldots} |T_{r + 1} - T_r|$. Throughout, we assume that $H$ is finite.

Proposition 1. Consider Algorithm 1. Let Assumption 1 hold.

(i) [Error bound] Suppose $\gamma \leq \min \left\{\frac{\eta}{4L_0n},\frac{1}{4H},\frac{\eta}{12\sqrt{3}B_2(L_0n + 1)H}\right\}$. Let $k^{*}$ denote an integer drawn uniformly at random from $\{0,\dots ,K\}$ and $\mathbf{f}^{\eta ,*}\triangleq \inf_{x}\mathbf{f}^{\eta}(x)$. Then,

$$
\begin{aligned}
\mathbb{E}\left[\|\nabla \mathbf{f}^{\eta}(\bar{x}_{k^{*}})\|^2\right] \leq{}& \frac{8\left(\mathbb{E}[\mathbf{f}^{\eta}(\bar{x}_0)] - \mathbf{f}^{\eta,*}\right)}{\gamma (K+1)} + \frac{12\gamma L_0 n^3}{\eta m}\left(\frac{2\nu^2}{\eta^2} + L_0^2\right) \\
&+ \frac{36 H^2 \gamma^2 (L_0 n + 1)^2}{\eta^2}\left(\frac{6 n^2 \nu^2 + 2 B_1^2}{\eta^2} + (3 + 4 B_2^2) L_0^2 n^2\right).
\end{aligned}
$$

(ii) [Iteration complexity] Let $\gamma \coloneqq \sqrt{\frac{m}{K}}$ and $H\coloneqq \left\lceil \sqrt[4]{\frac{K}{m^3}}\right\rceil$, where $\eta >0$. Let $\epsilon >0$ be an arbitrary scalar and $K_{\epsilon}$ denote the number of iterations such that $\mathbb{E}\left[\| \nabla \mathbf{f}^{\eta}(\bar{x}_{k^{*}})\|^{2}\right]\leq \epsilon$.
Then, the iteration complexity is $K_{\epsilon}\coloneqq \mathcal{O}\left(\left(\frac{L_0n^3\nu^2}{\eta^3} +\frac{L_0^3n^3}{\eta} +\frac{L_0^2n^4\nu^2}{\eta^4} +\frac{L_0^2n^2B_1^2}{\eta^4} +\frac{B_2^2L_0^4n^4}{\eta^2}\right)^2\frac{1}{m\epsilon^2}\right)$.
(iii) [Communication complexity] Suppose $K_{\epsilon} \geq m^{3}$. Then, the number of communication rounds to achieve the accuracy level in (ii) is $R \coloneqq \mathcal{O}\left((mK_{\epsilon})^{3/4}\right)$.

We now formally relate the original nonsmooth problem and its smoothed counterpart.

Proposition 2. Consider problem (4) and let Assumption 1 hold.

(i) Assume that $X_{i} = \mathbb{R}^{n}$ for all $i\in [m]$. Then, for any $\eta >0$, we have $\nabla \mathbf{f}^{\eta}(x)\in \partial_{2\eta}f(x)$.
(ii) Assume that the sets $X_{i}$ are identical for all $i\in [m]$. Let $\delta >0$ be an arbitrary scalar. If $\nabla \mathbf{f}^{\eta}(x) = 0$ and $\eta \leq \frac{\delta}{\max\{2,nL_0\}}$, then $0_{n}\in \partial_{\delta}\left(f + \mathbb{I}_{X}\right)(x)$.

# 4 Extensions to Bilevel and Minimax FL

4.1 Nondifferentiable nonconvex bilevel FL. In this section, we consider the federated bilevel optimization problem defined earlier as $(\mathbf{FL}_{bl})$. We consider the following smoothed implicit problem.

Definition 3 (Unconstrained smoothed implicit problem). Given $\eta >0$, consider an unconstrained smoothed implicit problem given as

$$
\min_{x \in \mathbb{R}^n} \mathbf{f}^{\eta}(x) \left\{\triangleq \frac{1}{m}\sum_{i=1}^{m}\left(\mathbb{E}_{\xi_i, u \in \mathbb{B}}[\tilde{f}_i(x + \eta u, y(x + \eta u), \xi_i)] + \frac{1}{2\eta}\operatorname{dist}^2(x, X_i)\right)\right\}. \tag{5}
$$

Assumption 2. Consider problem $(\mathbf{FL}_{bl})$. Let the following assumptions hold.
(i) For all $i \in [m]$, $\tilde{f}_i(\bullet, y, \xi_i)$ is $L_{0,x}^f(\xi_i)$-Lipschitz for any $y$ and $\tilde{f}_i(x, \bullet, \xi_i)$ is $L_{0,y}^f(\xi_i)$-Lipschitz for any $x$, where $L_{0,x}^f \triangleq \max_{i=1,\dots,m} \sqrt{\mathbb{E}[(L_{0,x}^f(\xi_i))^2]} < \infty$ and $L_{0,y}^f \triangleq \max_{i=1,\dots,m} \sqrt{\mathbb{E}[(L_{0,y}^f(\xi_i))^2]} < \infty$.
(ii) For all $i \in [m]$, for any $x$, $h_i(x, \bullet)$ is $L_{1,y}^h$-smooth and $\mu_h$-strongly convex. Further, for any $y$, the map $\nabla_y h_i(\bullet, y)$ is Lipschitz continuous with parameter $L_{0,x}^{\nabla h}$.
(iii) The sets $X_{i}$, for $i \in [m]$, satisfy Assumption 1 (iii).

The outline of $\mathrm{FedRZO_{bl}}$ is presented in Algorithm 2. We make the following remarks: (i) At each global step, the server makes two calls to a lower-level FL oracle to inexactly compute $y(\hat{x}_r + v_{T_r})$ and $y(\hat{x}_r)$. These lower-level FL calls are performed by the same clients, on the lower-level FL problem. (ii) The inexactness error is carefully controlled by terminating the lower-level FL oracle after $\mathcal{O}(r)$ iterations, where $r$ denotes the upper-level communication round index. (iii) $\mathrm{FedRZO_{bl}}$ skips the calls to the lower-level FL oracle during the local steps. To accommodate this, unlike in $\mathrm{FedRZO_{nn}}$, here we employ a global randomized smoothing direction, denoted by $v_{T_r}$, during communication round $r$ in the upper level.
Algorithm 2 Randomized Implicit Zeroth-Order Federated Averaging ($\mathrm{FedRZO_{bl}}$)
1: input: Server chooses a random $\hat{x}_0\in X$, stepsize $\gamma$, smoothing parameter $\eta$, synchronization indices $T_0\coloneqq 0$ and $T_{r}\geq 1$, where $r\geq 1$ is the upper-level communication round index
2: for $r = 0,1,\ldots$ do
3: Server generates a random replicate $v_{T_r}\in \eta \mathbb{S}$
4: Server calls FedAvg to receive $y_{\varepsilon_r}(\hat{x}_r + v_{T_r})$ and $y_{\varepsilon_r}(\hat{x}_r)$, denoting the inexact evaluations of $y(\hat{x}_r + v_{T_r})$ and $y(\hat{x}_r)$, respectively.
5: Server broadcasts $\hat{x}_r,\hat{x}_r + v_{T_r},y_{\varepsilon_r}(\hat{x}_r)$, and $y_{\varepsilon_r}(\hat{x}_r + v_{T_r})$ to all clients; $x_{i,T_r}\coloneqq \hat{x}_r$, $\forall i$
6: for $k = T_r,\dots ,T_{r + 1} - 1$ do in parallel by clients
7: Client $i$ generates the random replicates $\xi_{i,k}\in \mathcal{D}_i$
8: $g_{i,k}^{\eta ,\varepsilon_r}\coloneqq \frac{n}{\eta^2}\left(\tilde{f}_i(x_{i,k} + v_{T_r},y_{\varepsilon_r}(\hat{x}_r + v_{T_r}),\xi_{i,k}) - \tilde{f}_i(x_{i,k},y_{\varepsilon_r}(\hat{x}_r),\xi_{i,k})\right)v_{T_r}$
9: Client $i$ does a local update as $x_{i,k + 1}\coloneqq x_{i,k} - \gamma \left(g_{i,k}^{\eta ,\varepsilon_r} + \frac{1}{\eta}\left(x_{i,k} - \mathcal{P}_{X_i}(x_{i,k})\right)\right)$
10: end for
11: Server receives $x_{i,T_{r + 1}}$ from all clients and aggregates, i.e., $\hat{x}_{r + 1}\coloneqq \frac{1}{m}\sum_{i = 1}^{m}x_{i,T_{r + 1}}$
12: end for

Theorem 1 ($\mathrm{FedRZO_{bl}}$ when using an arbitrary inexact FL method for the lower level). Consider Algorithm 2. Let Assumption 2 hold, $k^{*}$ be chosen uniformly at random from $\{0,\ldots ,K\}$ with $K\coloneqq T_R - 1$, and $\gamma \leq \min \left\{\frac{\max\left\{2,\sqrt{0.1\Theta_3},4B_2\sqrt{3\Theta_2},4B_2\sqrt{3\Theta_3}\right\}^{-1}}{4H},\frac{\eta}{24(L_0^{\mathrm{imp}}n + 1)}\right\}$.
Let $\varepsilon_r$ denote the inexactness in obtaining the lower-level solution, i.e., $\mathbb{E}\left[\| y_{\varepsilon_r}(x) - y(x)\| ^2\mid x\right]\leq \varepsilon_r$ for $x\in \cup_{r = 0}^{R}\{\hat{x}_r,\hat{x}_r + v_{T_r}\}$.

(i) [Error bound] We have

$$
\begin{aligned}
\mathbb{E}\left[\|\nabla \mathbf{f}^{\eta}(\bar{x}_{k^{*}})\|^2\right] \leq{}& 8(\gamma K)^{-1}\left(\mathbb{E}\left[\mathbf{f}^{\eta}(x_0)\right] - \mathbf{f}^{\eta,*}\right) + \frac{8\gamma\Theta_1}{m} + 8H^2\gamma^2\max\{\Theta_2,\Theta_3\}\Theta_5 \\
&+ 8\left(H^2\gamma^2\max\{\Theta_2,\Theta_3\}\Theta_4 + \Theta_3\right)H\frac{\sum_{r=0}^{R-1}\varepsilon_r}{K},
\end{aligned}
$$

where $\Theta_1 := \frac{3(L_0^{\mathrm{imp}}n + 1)n^2}{2\eta}(L_0^{\mathrm{imp}})^2$, $\Theta_2 := \frac{5(L_0^{\mathrm{imp}}n + 1)^2}{8\eta^2}$, $\Theta_3 := \left(\frac{L_{0,x}^{\nabla h}}{\mu_h}\right)^2\frac{20 n^2}{\eta^2}(L_{0,y}^f)^2$, $\Theta_4 := \frac{96 n^2}{\eta^2}(L_{0,y}^f)^2$, and $\Theta_5 := \frac{48 B_1^2}{\eta^2} + (96 B_2^2 + 1)(L_0^{\mathrm{imp}})^2 n^2$.

(ii) [Iteration complexity] Let $\gamma \coloneqq \sqrt{\frac{m}{K}}$ and $H\coloneqq \left\lceil \sqrt[4]{\frac{K}{m^3}}\right\rceil$, where $\eta >0$. Let $\epsilon >0$ be an arbitrary scalar and $K_{\epsilon}$ denote the number of iterations such that $\mathbb{E}\left[\| \nabla \mathbf{f}^{\eta}(\bar{x}_{k^{*}})\|^{2}\right]\leq \epsilon$.
Also, suppose we employ an FL method in the lower level that achieves a sublinear convergence rate with a linear speedup in terms of the number of clients, i.e., $\varepsilon_r\coloneqq \tilde{\mathcal{O}}\left(\frac{1}{m\tilde{T}_{\tilde{R}_r}}\right)$, where $\tilde{R}_r$ denotes the number of communication rounds performed by the lower-level FL method when it is called in round $r$ of $\mathrm{FedRZO_{bl}}$ and $\tilde{T}_{\tilde{R}_r}$ denotes the number of iterations performed in the lower-level FL scheme to complete those $\tilde{R}_r$ rounds of lower-level communication. Further, suppose $\tilde{T}_{\tilde{R}_r}\coloneqq \tilde{\mathcal{O}}\left(m^{-1}(r + 1)^{\frac{2}{3}}\right)$. Then, the iteration complexity of $\mathrm{FedRZO_{bl}}$ (upper level) is $K_{\epsilon}\coloneqq \tilde{\mathcal{O}}\left(\left(\Theta_1^2 +\max \{\Theta_2,\Theta_3\}^2\Theta_5^2 +\max \{\Theta_2,\Theta_3\}^2\Theta_4^2 +\Theta_3^2\right)\frac{1}{m\epsilon^2}\right)$.
(iii) [Communication complexity] Suppose $K_{\epsilon} \geq m^{3}$. Then, the number of communication rounds in $\mathrm{FedRZO_{bl}}$ (upper level only) to achieve the accuracy level in (ii) is $R \coloneqq \mathcal{O}\left((mK_{\epsilon})^{3/4}\right)$.

Remark 2. (i) Importantly, Theorem 1 is equipped with an explicit communication complexity $R \coloneqq \mathcal{O}\left((mK_{\epsilon})^{3 / 4}\right)$, matching that of single-level nonsmooth nonconvex problems in Proposition 1. This implies that as long as the lower-level FL oracle has a rate of $\varepsilon_r \coloneqq \tilde{\mathcal{O}}\left(\frac{1}{m\tilde{T}_{\tilde{R}_r}}\right)$, the inexactness does not affect the communication complexity bounds of the method in the upper level.

(ii) As noted in the assumptions, in the upper level, we allow for heterogeneity. To elaborate on the overall communication complexity of $\mathrm{FedRZO_{bl}}$, we provide detailed complexity results in Table 1 for three cases, where we employ Local SGD [26], FedAC [54], and LFD [20] for the lower-level scheme.
All these schemes meet the linear speedup condition in Theorem 1. Notably, among these schemes, the last one allows for the presence of heterogeneity. As an example, Algorithm 3 outlines FedAvg when it is employed in step 4 of Algorithm 2.

Algorithm 3 FedAvg $(x,r,y_{0,r},m,\tilde{\gamma},\tilde{H},\tilde{T}_{\tilde{R}_r})$ for the lower level
1: input: $x$, $r$; server chooses a random initial point $\hat{y}_0 \coloneqq y_{0,r} \in \mathbb{R}^{\tilde{n}}$, $a_r \coloneqq \max \{m, 4\kappa_h, r\} + 1$ where $\kappa_h \coloneqq \frac{L_{1,y}^h}{\mu_h}$, $\tilde{\gamma} \coloneqq \frac{1}{\mu_h a_r}$, $\tilde{T}_{\tilde{R}_r} \coloneqq 2a_r \ln(a_r)$, and $\tilde{H} \coloneqq \lceil \frac{\tilde{T}_{\tilde{R}_r}}{m} \rceil$
2: for $\tilde{r} = 0,1,\dots,\tilde{R}_r - 1$ do
3: Server broadcasts $\hat{y}_{\tilde{r}}$ to all agents: $y_{i,\tilde{T}_{\tilde{r}}} \coloneqq \hat{y}_{\tilde{r}}$, $\forall i$
4: for $t = \tilde{T}_{\tilde{r}},\dots ,\tilde{T}_{\tilde{r} +1} - 1$ do in parallel by agents
5: Agent $i$ does a local update as $y_{i,t + 1}\coloneqq y_{i,t} - \tilde{\gamma}\nabla_yh_i(x,y_{i,t},\tilde{\xi}_{i,t})$
6: Agent $i$ sends $y_{i,\tilde{T}_{\tilde{r}+1}}$ to the server
7: end for
8: Server aggregates, i.e., $\hat{y}_{\tilde{r} +1}\coloneqq \frac{1}{m}\sum_{i = 1}^{m}y_{i,\tilde{T}_{\tilde{r} +1}}$
9: end for

(iii) We use $L_0^{\mathrm{imp}}(\xi_i)$ to denote the Lipschitz continuity constant of the random local implicit function $\tilde{f}_i(x,y(x),\xi_i)$, and let $L_0^{\mathrm{imp}} \triangleq \max_{i = 1,\dots,m} \sqrt{\mathbb{E}[(L_0^{\mathrm{imp}}(\xi_i))^2]} < \infty$. As shown in the supplementary material, $L_0^{\mathrm{imp}}(\xi_i)$ can be obtained explicitly in terms of problem parameters.

Remark 3. A technical challenge in designing Algorithm 2 is that inexact evaluations of $y(x)$ must be avoided during the local steps.
This is because we consider bilevel problems of the form $(\mathbf{FL}_{bl})$ where both levels are distributed. Consequently, an inexact evaluation of $y(x)$ by each client at a local step in the upper level would require significant communication, which is undesirable in the FL framework. We carefully address this challenge by introducing a delayed inexact computation of $y(x)$. In step 8 of Algorithm 2, we note that $y_{\varepsilon}$ is evaluated at $\hat{x}_r + v_{T_r}$, which differs from the vector used by the client, i.e., $x_{i,k} + v_{T_r}$. At each communication round in the upper level, we compute $y(x)$ inexactly only twice, in the global step, and then use this delayed information in the local steps. This delayed inexact computation of $y$ introduces a challenge in the convergence analysis, which makes the design and analysis of Algorithm 2 a non-trivial extension of Algorithm 1.

4.2 Nondifferentiable nonconvex-strongly concave minimax FL. Next, we consider the decentralized federated minimax problem of the form $(\mathbf{FL}_{mm})$ introduced earlier. This problem is a zero-sum game and can be viewed as an instance of the non-zero-sum game $(\mathbf{FL}_{bl})$ with $\tilde{h}_i \coloneqq -\tilde{f}_i$.

Corollary 1. Consider Algorithm 2 for solving $(\mathbf{FL}_{mm})$. Let Assumption 2 hold for $\tilde{h}_i \coloneqq -\tilde{f}_i$ and $\mathcal{D}_i \coloneqq \hat{\mathcal{D}}_i$. Then, all the results in Theorem 1 hold true.

# 5 Experiments

We present three sets of experiments to validate the performance of the proposed algorithms. In Section 5.1, we implement Algorithm 1 on ReLU neural networks (NNs) and compare it with some recent FL methods. In Sections 5.2 and 5.3, we implement Algorithm 2 on federated hyperparameter learning and a minimax formulation in FL. Throughout, we use the MNIST dataset. Additional experiments on a higher-dimensional dataset (CIFAR-10) are presented in the supplementary material.
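Before turning to the experiments, the overall loop of Algorithm 1 can be illustrated end-to-end on a toy nonsmooth problem (our own construction; the $\ell_1$ losses, box sets, and all constants below are illustrative and not part of the paper's setup):

```python
import numpy as np

def fedrzo_nn(f_list, proj_list, x0, gamma, eta, H, rounds, rng):
    """Minimal simulation of Algorithm 1: H local zeroth-order projected
    steps per client per round, followed by server-side averaging."""
    n = x0.size
    x_hat = x0.copy()
    for _ in range(rounds):
        local_iterates = []
        for f, proj in zip(f_list, proj_list):      # clients run in parallel
            x = x_hat.copy()
            for _ in range(H):                      # local steps (steps 4-8)
                s = rng.normal(size=n)
                s /= np.linalg.norm(s)
                v = eta * s                         # v_{i,k} ~ Unif(eta*S)
                g = (n / eta**2) * (f(x + v) - f(x)) * v
                x = x - gamma * (g + (x - proj(x)) / eta)  # step 7
            local_iterates.append(x)
        x_hat = np.mean(local_iterates, axis=0)     # server aggregation (step 9)
    return x_hat

# Toy instance: client i holds the nonsmooth loss f_i(x) = ||x - c_i||_1
# with box constraint X_i = [-5, 5]^2 (both our own choices).
rng = np.random.default_rng(1)
centers = [np.array([1.0, -1.0]), np.array([0.0, 0.5]), np.array([-0.5, 1.0])]
f_list = [lambda x, c=c: np.abs(x - c).sum() for c in centers]
proj_list = [lambda x: np.clip(x, -5.0, 5.0)] * 3
avg_loss = lambda x: np.mean([f(x) for f in f_list])
x0 = np.array([3.0, 3.0])
x_final = fedrzo_nn(f_list, proj_list, x0, gamma=0.01, eta=0.05, H=5, rounds=200, rng=rng)
# avg_loss(x_final) should fall well below avg_loss(x0)
```

With these (arbitrary) parameters the averaged iterate drifts toward the coordinate-wise median of the client targets, mirroring how the two-point estimator behaves as a noisy subgradient on nonsmooth losses.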
![](images/9d69f984610289588d1cce2817c49d9d8aed1643bb3e9504c001ba886868d017.jpg)
Figure 1: Performance of $\mathrm{FedRZO_{nn}}$ on a single-layer ReLU NN in terms of communication rounds for different numbers of local steps and different values of the smoothing parameter $\eta$. $\mathrm{FedRZO_{nn}}$ benefits from a larger number of local steps and shows robustness with respect to the choice of $\eta$.

5.1 Federated training of ReLU NNs. We implement $\mathrm{FedRZO_{nn}}$ for federated training of a single-layer ReLU NN with $N_{1}$ neurons. This is a nondifferentiable nonconvex optimization problem, aligning with $(\mathbf{FL}_{nn})$ and taking the form $\min_{x:=(Z,w)\in\mathcal{X}}\frac{1}{2m}\sum_{i=1}^{m}\sum_{\ell\in\mathcal{D}_{i}}(v_{i,\ell}-\sum_{q=1}^{N_{1}}w_{q}\sigma(Z_{\bullet,q}U_{i,\ell}))^{2}+\frac{\lambda}{2}\left(\|Z\|_{F}^{2}+\|w\|^{2}\right)$, where $m$ denotes the number of clients, $Z\in\mathbb{R}^{N_{1}\times N_{0}}$, $w\in\mathbb{R}^{N_{1}}$, $N_{0}$ is the feature dimension, $U_{i,\ell}\in\mathbb{R}^{N_{0}}$ and $v_{i,\ell}\in\{-1,1\}$ are the $\ell$th input and output training samples of client $i$, respectively, $\sigma(x):=\max\{0,x\}$, and $\lambda$ is the regularization parameter.

Setup. We distribute the training dataset among $m \coloneqq 5$ clients and implement $\mathrm{FedRZO_{nn}}$ for FL training with $N_{1} \coloneqq 4$ neurons under three settings of the smoothing parameter, $\eta \in \{0.1, 0.01, 0.001\}$, with $\gamma \coloneqq 10^{-5}$ and $\lambda \coloneqq 0.01$. We study the performance of the method under different numbers of local steps, $H \in \{1, 5, 10, 20\}$.

Results and insights. Figure 1 presents the first set of numerics for $\mathrm{FedRZO_{nn}}$ under the aforementioned settings. In terms of communication rounds, we observe that the performance of the method improves when using a larger number of local steps.
In fact, when $H\coloneqq 1$, $\mathrm{FedRZO_{nn}}$ is equivalent to a parallel zeroth-order SGD that communicates among clients at every iteration, resulting in poor performance and motivating the need for the FL framework. In terms of $\eta$, while we observe robustness of the scheme with respect to the original loss function, we also note a slight improvement in the empirical speed of convergence in early steps as $\eta$ increases. This is aligned with the dependence of the convergence bound in Proposition 1 on $\eta$.

Comparison with other FL methods. While we are unaware of other FL methods for addressing nondifferentiable nonconvex problems, we compare $\mathrm{FedRZO_{nn}}$ with other FL methods, including FedAvg [34], FedProx [29], FedMSPP [55], and Scaffnew [35], when applied to a NN with a smooth rectifier. Details of these experiments are provided in the supplementary material.

5.2 Federated hyperparameter learning. To validate $\mathrm{FedRZO_{bl}}$, we consider the following FL hyperparameter learning problem for binary classification using the logistic loss.

$$
\min_{x \in X,\, y \in \mathbb{R}^n} f(x, y) \triangleq \frac{1}{m}\sum_{i=1}^{m}\sum_{\ell \in \mathcal{D}_i}\log\left(1 + \exp(-v_{i,\ell}U_{i,\ell}^T y)\right)
$$

subject to $y \in \arg \min_{y \in \mathbb{R}^n} h(x, y) \triangleq \frac{1}{m} \sum_{i=1}^{m} \left( \sum_{\tilde{\ell} \in \tilde{\mathcal{D}}_i} \log \left( 1 + \exp(-v_{i,\tilde{\ell}} U_{i,\tilde{\ell}}^T y) \right) + x_i \frac{\|y\|^2}{2} \right)$,

where $m$ is the number of clients, $x_i$ denotes the regularization parameter of client $i$, $U_{i,\ell} \in \mathbb{R}^n$ and $v_{i,\ell} \in \{-1,1\}$ are the $\ell$th input and output testing samples of client $i$, respectively, and $U_{i,\tilde{\ell}} \in \mathbb{R}^n$ and $v_{i,\tilde{\ell}} \in \{-1,1\}$ are the $\tilde{\ell}$th input and output training samples of client $i$, respectively.
The constraint set $X$ is given by $X := \{x \in \mathbb{R}^m \mid x \geq \underline{\mu} \mathbf{1}_m\}$, where $\underline{\mu} > 0$. This problem is an instance of $(\mathbf{FL}_{bl})$, where the lower-level problem is $\ell_2$-regularized and the regularization parameter is a decision variable of the upper-level FL problem. The convergence results are presented in Fig. 2 (left).

5.3 Fair classification learning. Here, we study the convergence of $\mathrm{FedRZO_{bl}}$ in minimax FL. We consider solving an FL minimax formulation of the fair classification problem [39] of the form

$$
\min_{x \in \mathbb{R}^n} \max_{y \in \mathbb{R}^C} \frac{1}{m}\sum_{i=1}^{m}\sum_{c=1}^{C} y_c \sum_{\ell \in \mathcal{D}_{i,c}}(v_{i,\ell} - \sum_{q=1}^{N_1} w_q \sigma(Z_{\bullet,q}U_{i,\ell}))^2 - \frac{\lambda}{2}\|y\|^2,
$$

where $c$ denotes the class index and $\mathcal{D}_{i,c}$ denotes the portion of the local dataset of client $i$ comprised of class-$c$ samples. The loss function follows the same formulation as in Section 5.1, where a ReLU neural network is employed. This problem is nondifferentiable and nonconvex-strongly concave, fitting well with the assumptions of our work on minimax FL problems. The performance of our algorithm is presented in Figure 2 (right).

![](images/97af97b557b89805db6d586b67358754c155df492022ec497fbb14df2f2062d4.jpg)
Figure 2: (Left) Convergence of $\mathrm{FedRZO_{bl}}$ in hyperparameter FL for the $\ell_2$-regularized logistic loss, where we plot the loss function on test data for different values of local steps with $95\%$ CIs. (Right) Convergence of $\mathrm{FedRZO_{bl}}$ in minimax FL, where we present test results for solving a nondifferentiable nonconvex-strongly concave FL minimax formulation of the fair classification problem [39].
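To convey the bilevel mechanics behind Section 5.2, the following minimal sketch uses a single client and no federation: the lower level is solved in closed form (standing in for the inexact FL oracle of Algorithm 3), the hyperparameter is one scalar ridge weight, and the data are synthetic; none of these choices come from the paper's experimental setup:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic train/validation data (illustrative only).
w_true = rng.normal(size=5)
A_tr = rng.normal(size=(40, 5)); b_tr = A_tr @ w_true + 0.5 * rng.normal(size=40)
A_va = rng.normal(size=(40, 5)); b_va = A_va @ w_true + 0.5 * rng.normal(size=40)

def y_of(lam):
    """Lower level in closed form: ridge regression (stands in for the FL oracle)."""
    return np.linalg.solve(A_tr.T @ A_tr + lam * np.eye(5), A_tr.T @ b_tr)

def f_implicit(lam):
    """Upper-level (validation) loss of the implicit problem f(x, y(x))."""
    r = A_va @ y_of(lam) - b_va
    return 0.5 * (r @ r)

# Zeroth-order descent on the scalar hyperparameter, projected onto X = [mu, inf).
mu, eta, gamma = 1e-3, 0.05, 2e-3
lam0 = 5.0
lam = lam0
for _ in range(200):
    v = eta * rng.choice([-1.0, 1.0])   # in one dimension, eta*S = {-eta, +eta}
    g = (1.0 / eta**2) * (f_implicit(lam + v) - f_implicit(lam)) * v
    lam = max(mu, lam - gamma * g)      # projected local step, as in step 9 of Algorithm 2
# f_implicit(lam) should not exceed the initial loss f_implicit(lam0)
```

Each upper-level step evaluates the implicit objective at two nearby points, exactly the structure of step 8 of Algorithm 2; in the federated version those two lower-level solves are delayed, inexact FedAvg calls rather than exact matrix solves.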
# 6 Concluding Remarks

Federated learning has assumed growing relevance in ML. However, many practical problems involve local objectives that are jointly afflicted by nonconvexity and nondifferentiability, precluding resolution by most FL schemes, which cope with nonconvexity only in smooth settings. We close this gap via a zeroth-order communication-efficient FL framework that contends with both nondifferentiability and nonconvexity, with rate and complexity guarantees for computing approximate Clarke-stationary points. Extensions to nonconvex bilevel and nonconvex-strongly concave minimax settings are developed via inexact generalizations.

# 7 Acknowledgments

We acknowledge the funding support from the U.S. Department of Energy under grant #DE-SC0023303, and the U.S. Office of Naval Research under grants #N00014-22-1-2757 and #N00014-22-1-2589. We also thank the four anonymous referees for their constructive suggestions.

# References

[1] J. V. Burke, A. S. Lewis, and M. L. Overton. Approximating subdifferentials by random sampling of gradients. Math. Oper. Res., 27(3):567-584, 2002.
[2] J. V. Burke, A. S. Lewis, and M. L. Overton. A robust gradient sampling algorithm for nonsmooth, nonconvex optimization. SIAM J. Optim., 15(3):751-779, 2005.
[3] F. H. Clarke. Generalized gradients and applications. Transactions of the American Mathematical Society, 205:247-262, 1975.
[4] S. Cui, U. V. Shanbhag, and F. Yousefian. Complexity guarantees for an implicit smoothing-enabled method for stochastic MPECs. Math. Program., 198(2):1153-1225, 2023.
[5] Y. Cui and J. S. Pang. Modern Nonconvex Nondifferentiable Optimization, volume 29 of MOS-SIAM Series on Optimization. SIAM, Philadelphia, PA, 2022.
[6] S. Dafermos.
# ZipLM: Inference-Aware Structured Pruning of Language Models

Eldar Kurtic

IST Austria

eldar.kurtic@ist.ac.at

Elias Frantar

IST Austria

elias.frantar@ist.ac.at

Dan Alistarh

IST Austria & Neural Magic

dan.alistarh@ist.ac.at

# Abstract

The breakthrough performance of large language models (LLMs) comes with major computational footprints and high deployment costs. In this paper, we progress towards resolving this problem by proposing a novel structured compression approach for LLMs, called ZipLM.
ZipLM achieves state-of-the-art accuracy-vs-speedup results, while matching a set of desired target runtime speedups in any given inference environment. Specifically, given a model, a dataset, an inference environment, as well as a set of speedup targets, ZipLM iteratively identifies and removes components with the worst loss-runtime trade-off. Unlike prior methods that specialize in either the post-training/one-shot or the gradual compression setting, and only for specific families of models such as BERT (encoder) or GPT (decoder), ZipLM produces state-of-the-art compressed models across all these settings. Furthermore, ZipLM achieves superior results for a fraction of the computational cost relative to prior distillation and pruning techniques, making it a cost-effective approach for generating an entire family of smaller, faster, and highly accurate models, guaranteed to meet the desired inference specifications. In particular, ZipLM outperforms all prior $\mathrm{BERT}_{\mathrm{base}}$ distillation and pruning techniques, such as CoFi, MiniLM, and TinyBERT. Moreover, it matches the performance of the heavily optimized MobileBERT model, obtained via extensive architecture search, by simply pruning the baseline $\mathrm{BERT}_{\mathrm{large}}$ model. When compressing GPT2, ZipLM outperforms DistilGPT2 while being $60\%$ smaller and $30\%$ faster. Our code is available at: https://github.com/IST-DASLab/ZipLM.

# 1 Introduction

The high accuracy of modern language models from the Transformer family [53] comes at the price of massive computational cost, which hinders their practical adoption in resource-constrained settings. This has motivated the development of model compression techniques, which can be categorized into pruning [17], quantization [9], and distillation [11]. In this paper, we focus on structural compression, whose goal is to reduce model size by removing entire sub-components, such as rows or columns from the model's weight matrices.
The key advantage of structured pruning, relative to unstructured pruning of individual weights, is that the model can be reshaped to new dimensions, and the resulting computational savings can be leveraged on any hardware, without specialized computational support. At the same time, structured pruning introduces significant challenges. First, models are usually highly sensitive to structured compression, and most methods require gradual compression, including retraining cycles designed to allow the model to recover accuracy. In addition, structural compression significantly complicates the use of knowledge distillation [16], which is usually done via manual or dynamic layer mapping [21, 59]. On the practical side, another challenge is that most existing techniques do not provide runtime speedup guarantees: the model is pruned to a fixed sparsity or FLOPS target, and then must be evaluated in the target inference environment. If the pruned model fails to meet the target inference specifications, the whole process must be repeated from scratch.

Overview. In this paper, we resolve these issues and provide a novel structured pruning approach called ZipLM, which achieves state-of-the-art performance, both in the post-training/one-shot setting, where retraining is not desirable, as well as in the popular gradual compression setting, where retraining is possible. We accomplish this via an inference-aware algorithm, which successfully balances the loss-runtime trade-off at each pruning step. By taking runtime into account, we avoid removing components that do not bring significant speedup gains. Additionally, our algorithm provides speedup guarantees for compressed models, a highly desirable property in practical applications.
We summarize our contributions as follows:

- We introduce a novel structured pruning approach, which unifies the saliency criteria investigated by prior work (weight magnitude, activation impact, and removal of linearly redundant structures), while considering local (layer-wise) and global correlations. We augment it to be inference-aware, ensuring desired latency or throughput in any given configuration.
- We complement the algorithm with a novel layer-wise token-level distillation, which consistently boosts accuracy on small datasets and does not require manual layer matching, circumventing a limitation of prior structured pruning techniques.
- ZipLM is the first structured pruning approach that achieves state-of-the-art results for both post-training/one-shot compression and gradual pruning settings, while being applicable to both BERT (encoder) and GPT (decoder) language models, without any modifications.
- ZipLM is practical and efficient. For a set of desired speedups (e.g. 2x, 5x, 10x) in the target inference environment (e.g. batch-size=128, sequence-length=384, device=V100), in a single run and under the same set of hyper-parameters, it produces the entire family of compressed models, one for each speedup target. Consequently, it leads to state-of-the-art results in GPU-based inference environments. Moreover, it is compatible with unstructured pruning and quantization, leading to state-of-the-art results even for CPU-based environments.

# 2 Related Work

Distillation-based compression methods focus on training a smaller student model to mimic the representations of a larger teacher model. The "distance" between the representations of student and teacher is often architecture-specific. MiniLM [55] uses a deep self-attention mechanism to replicate the attention mechanism of the teacher, and TinyBERT [21] employs a bespoke distillation mechanism for a manually-picked subset of layers.
Both methods offer very strong baselines, generally outperforming other approaches, except for MobileBERT. MobileBERT [51] involves first training a custom large BERT teacher model from scratch, and then deviates from the standard architecture [2] by introducing heavily-optimized components with reduced latency, whose combinations are decided in a neural architecture search (NAS)-like fashion. It achieves strong results in terms of accuracy-per-parameter, at the price of a computationally expensive search process. DistilBERT and DistilGPT2 [44] involve training a fixed student obtained by removing every other layer from the teacher, while BERT-PKD [50] employs incremental knowledge extraction through the distillation of intermediate layers. Well-Read-Students [52] reduces the size of the standard BERT architecture through principled downscaling of internal dimensions. DynaBERT [18], on the other hand, distills knowledge into a student model that is both depth- and width-adaptive.

Structural pruning methods usually start from a large pre-trained model, and iteratively reduce the dimensions of weight matrices. Block Movement Pruning [27] identifies and removes redundant rectangular blocks of weights while following the movement pruning intuition [45] that weights moving towards zero during fine-tuning should be removed. FLOP [56] and Low-Rank [40] use matrix decomposition techniques to progressively remove rank-1 components from factorized weight matrices during training. BERT-of-Theseus [60] employs a similar approach, but replaces entire submodules with smaller counterparts. Methods like LayerDrop [3] and Poor Man's BERT [43] address structured compression through various layer-dropping techniques. LayerDrop uses structured layer-dropout regularization to train a model resilient to sub-network selection during inference, while Poor Man's BERT explores a wide range of layer-dropping strategies.
The recent CoFi method [59] employs masks of different granularities to jointly prune coarse and fine-grained submodules during fine-tuning, combined with an optional customized distillation technique. CoFi is the state-of-the-art structural pruning method; relative to distillation methods, CoFi outperforms MiniLM and TinyBERT, but not MobileBERT, in terms of accuracy-vs-speedup.

Other compression methods, such as those that exploit dynamic forms of sparsity which appear at runtime [35], or those that utilize lower bit-width representations of weights and/or activations [6, 32], are complementary to our approach. We demonstrate this in Section 5, where we apply quantization to obtain even higher compression ratios for edge deployment environments like commodity CPUs.

# 3 Method

Removing large structures like entire matrix columns or attention heads from a language model quickly leads to severe accuracy degradation, from which it is often difficult to recover even with extensive finetuning. This is why current state-of-the-art approaches like Block Movement Pruning [27] or CoFi [59] opt for integrating pruning directly into training (via sampling or differentiable approximations), rather than performing it in the standard gradual pruning fashion of discrete steps with finetuning in between. However, as we will show, by designing a new, highly accurate pruning algorithm that accounts for both local correlations of structures within single layers and global correlations across layers, we can in fact apply the gradual pruning paradigm, with all its advantages, and improve significantly over the current state-of-the-art.
# 3.1 The ZipLM Structured Pruning Algorithm (Local Correlations)

Most existing structured pruning criteria [33, 31] are based on one or two of the following assumptions about saliency: structures with lower (average) weight magnitude are easier to prune [15, 29], structures with small input activations can be removed at little loss [34], and structures that are close to a linear combination of other structures are the most redundant [14, 49]. We will now show how all these aspects can be jointly considered in a principled manner via our new ZipLM technique.

Problem formulation. Our approach starts from the idea of applying structured compression layer-wise, in a way that allows the layer to preserve most of its output characteristics. This setup is popular in the post-training quantization and unstructured pruning literature [38, 19, 8], and can be implemented as follows. We are given a small amount of calibration data, which we run through the network to obtain "reference" inputs and outputs for each layer. Then, for each layer, given the calibration inputs $\mathbf{X}$ and the original layer weights $\mathbf{W}$, we aim to find compressed weights $\widehat{\mathbf{W}}$ respecting the compression constraint $\mathcal{C}$, which best approximate the original output, measured via the squared error metric. If we assume that the input and weight matrices have an appropriate rectangular form, the problem can be formalized as:

$$
\operatorname{argmin}_{\widehat{\mathbf{W}}} \left\| \widehat{\mathbf{W}} \mathbf{X} - \mathbf{W} \mathbf{X} \right\|_2^2 \quad \text{subject to} \quad \widehat{\mathbf{W}} \in \mathcal{C}. \tag{1}
$$

This objective can be decomposed across the rows of $\mathbf{W}$, leading to a set of sparse linear regression problems, one per row.
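As a quick numerical check, the objective of Equation 1 and its row-wise decomposition can be sketched in a few lines of numpy (a toy layer with made-up sizes; the pruned columns are chosen arbitrarily for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical small layer: d_row output rows, d_col input columns,
# and a calibration batch of n samples (all sizes are illustrative).
d_row, d_col, n = 4, 8, 32
W = rng.standard_normal((d_row, d_col))
X = rng.standard_normal((d_col, n))  # calibration inputs

def recon_error(W_hat, W, X):
    """Squared output-reconstruction error of Equation 1 (Frobenius norm)."""
    return float(np.linalg.norm(W_hat @ X - W @ X) ** 2)

# A structured constraint C: zero out whole columns (here columns 2 and 3),
# without any compensating weight update yet.
mask = np.ones(d_col, dtype=bool)
mask[[2, 3]] = False
W_hat = W * mask

# The error decomposes across rows: per-row errors sum to the total error.
per_row = sum(recon_error(W_hat[i:i+1], W[i:i+1], X) for i in range(d_row))
assert np.isclose(per_row, recon_error(W_hat, W, X))
```

The assertion verifies the row-wise decomposition claimed in the text; the compensating update $\boldsymbol{\delta}_{\mathbf{S}}$ that makes such column removal cheap is derived next.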
These row-wise problems are independent, which forms the basis of related work [8]; yet, since we do structured pruning, they become dependent, as we would like to prune the same weight indices across all rows, i.e. prune entire columns. Thus, finding the optimal weights $\widehat{\mathbf{W}} \in \mathcal{C}$ is equivalent to finding: 1) the optimal structure $\mathbf{S}$ of the desired shape to be removed, which we assume to be applied across all rows, with corresponding pruning mask $\mathbf{M}_{\mathbf{S}}$, where pruned indices have value 1 in the mask, and others are 0; and 2) the corresponding update $\boldsymbol{\delta}_{\mathbf{S}}$ to all of the remaining weights, optimally compensating for the error caused by the removal of weights in $\mathbf{S}$.

Saliency scores and weight update. Let $\mathbf{H} = \mathbf{X}\mathbf{X}^{\top}$ be the Hessian matrix for the $\ell_2$-minimization problem in Equation 1, which is independent of the weights. Define $\mathbf{W}_{i,\mathbf{M}_{\mathbf{S}}}$ to be the subset of weights under the mask $\mathbf{M}_{\mathbf{S}}$ in row $i$, and by $(\mathbf{H}^{-1})_{\mathbf{M}_{\mathbf{S}},\mathbf{M}_{\mathbf{S}}}$ the submatrix of the inverse Hessian corresponding to the entries under the mask $\mathbf{M}_{\mathbf{S}}$.
Then, we can obtain the optimal mask and weight update as follows:

$$
\operatorname{argmin}_{\mathbf{S}} \sum_{i=0}^{d_{\mathrm{row}}} \mathbf{W}_{i,\mathbf{M}_{\mathbf{S}}} \cdot \left( \left( \mathbf{H}^{-1} \right)_{\mathbf{M}_{\mathbf{S}},\mathbf{M}_{\mathbf{S}}} \right)^{-1} \cdot \mathbf{W}_{i,\mathbf{M}_{\mathbf{S}}}^{\top} \tag{2}
$$

$$
\boldsymbol{\delta}_{\mathbf{S}} = -\mathbf{W}_{:,\mathbf{M}_{\mathbf{S}}} \cdot \left( \left( \mathbf{H}^{-1} \right)_{\mathbf{M}_{\mathbf{S}},\mathbf{M}_{\mathbf{S}}} \right)^{-1} \cdot \left( \mathbf{H}^{-1} \right)_{\mathbf{M}_{\mathbf{S}},:} \tag{3}
$$

We obtain this by extending the Optimal Brain Surgeon [13, 24] formulas for solving Equation 1 to cover all $d_{\mathrm{row}}$ weight matrix rows simultaneously. Importantly, the subselection of the inverse Hessian $\left( \left( \mathbf{H}^{-1} \right)_{\mathbf{M}_{\mathbf{S}},\mathbf{M}_{\mathbf{S}}} \right)^{-1}$ is shared between all rows. Further, since we generally consider only non-overlapping sets $\mathbf{S}$ of the same size, we pay just $O(d_{\mathrm{col}} \cdot |\mathbf{M}_{\mathbf{S}}|^2)$ total cost for all extra inversions. Since the number of structures in the mask $|\mathbf{M}_{\mathbf{S}}|$ is usually small, e.g. attention heads usually consist of 64 columns, the overall cost of these inversions is low.

Simply selecting the structures to prune according to the criterion in Equation 2 unifies the weight magnitude and activation influence criteria (via the Hessian), but still ignores any correlations between structures. We address this by pruning structures one-at-a-time, while always applying the update $\boldsymbol{\delta}_{\mathbf{S}}$ and fully recomputing $\mathbf{H}^{-1}$ relative to the remaining structures. For example, if there exist two redundant structures $S_1$ and $S_2$, we will first drop $S_1$ and update $S_2$ to compensate for this removal, at which point $S_2$ is no longer easy to prune.
Without this one-at-a-time removal, both structures would have been incorrectly removed, as they each individually seem easy to prune according to Equation 2. Executing this strategy naively would require a full $O(d_{\mathrm{col}}^3)$ recomputation of the inverse Hessian relative to the remaining structures at each step, which would be very slow. However, this can be avoided by removing the rows and columns corresponding to $\mathbf{M}_{\mathbf{S}}$ directly in the inverse with one step of Gaussian elimination [8], applied block-wise to cover larger structures, as follows:

$$
\mathbf{H}^{-1} - \mathbf{H}^{-1}_{:,\mathbf{M}_{\mathbf{S}}} \cdot \left( \left( \mathbf{H}^{-1} \right)_{\mathbf{M}_{\mathbf{S}},\mathbf{M}_{\mathbf{S}}} \right)^{-1} \cdot \mathbf{H}^{-1}_{\mathbf{M}_{\mathbf{S}},:}, \tag{4}
$$

which takes only $O(|\mathbf{M}_{\mathbf{S}}| \cdot d_{\mathrm{col}}^2)$ time. We provide complete pseudocode in Algorithm 1.

Algorithm 1 The ZipLM pruning algorithm. Given the inverse Hessian $\mathbf{H}^{-1} = (2\mathbf{X}\mathbf{X}^{\top} + \lambda \mathbf{I})^{-1}$, we remove exactly $k$ structures from the corresponding weight matrix $\mathbf{W}$.
+ +$\mathbf{R}\gets$ set of all possible structures + +for $k$ times do + +$$ +\mathbf {S} \leftarrow \operatorname {a r g m i n} _ {\mathbf {S}} \sum_ {i = 0} ^ {d _ {\mathrm {r o w}}} \mathbf {W} _ {i, \mathbf {M} _ {\mathbf {S}}} \cdot \left(\left(\mathbf {H} ^ {- 1}\right) _ {\mathbf {M} _ {\mathbf {S}}, \mathbf {M} _ {\mathbf {S}}}\right) ^ {- 1} \cdot \mathbf {W} _ {i, \mathbf {M} _ {\mathbf {S}}} ^ {\top} +$$ + +$$ +\boldsymbol {\delta} _ {S} \leftarrow - \mathbf {W} _ {:, \mathbf {M} _ {S}} \cdot \left(\left(\mathbf {H} ^ {- 1}\right) _ {\mathbf {M} _ {S}, \mathbf {M} _ {S}}\right) ^ {- 1} \cdot \left(\mathbf {H} ^ {- 1}\right) _ {\mathbf {M} _ {S};} +$$ + +$$ +\mathbf {W} \leftarrow \mathbf {W} + \delta_ {S} +$$ + +$$ +\mathbf {H} ^ {- 1} \leftarrow \mathbf {H} ^ {- 1} - \mathbf {H} _ {:, \mathbf {M} _ {\mathbf {S}}} ^ {- 1} \cdot \left(\left(\mathbf {H} ^ {- 1}\right) _ {\mathbf {M} _ {\mathbf {S}}, \mathbf {M} _ {\mathbf {S}}}\right) ^ {- 1} \cdot \mathbf {H} _ {\mathbf {M} _ {\mathbf {S}},:} ^ {- 1} +$$ + +$$ +\mathbf {R} \leftarrow \mathbf {R} - \{\mathbf {S} \} +$$ + +end for + +$\mathbf{W}\leftarrow \mathbf{W}\odot \mathbf{M}_{\mathbf{R}}$ + +We utilize the fact that the values corresponding to pruned weights in $\mathbf{W}$ and in the inverse Hessian $\mathbf{H}^{-1}$ do not affect any subsequent calculations and can therefore be ignored even if they are not exactly zero. However, in the end we have to prune them explicitly again by multiplying with the overall mask to ensure that they are exactly zero. In a practical implementation, $(\mathrm{(H^{-1})_{Ms,M_s})^{-1}}$ should only be computed once and reused when computing the corresponding sum across all rows. + +Pruned structures. Focusing on Transformers, we consider three types of structural removal: dropping attention heads, shrinking the expanded intermediate dimension of the fully-connected network (FC) layers, and removing entire residual parts, i.e. attention or FC-modules. 
We implement this by dropping $d_{\mathrm{head}}$ consecutive columns in the out-matrix of the attention block, and individual columns in the second linear layer of the feed-forward network. Once these column-structures are zeroed out, the corresponding rows in previous layers can be safely removed without any output change. Crucially, by pruning e.g. columns in the FC2 layer rather than equivalent rows in FC1, we can utilize the input correlations via Hessian information using the ZipLM pruner.

Novelty relative to existing Optimal Brain Surgeon (OBS) approaches. The original framework [13], as well as modern efficient versions [47, 7, 8], have been explicitly developed for unstructured pruning, i.e. removing individual weights. It is nontrivial to extend them to structured pruning, as this involves considering additional correlations, both within as well as across multiple blocks (such blocks are usually employed for computational tractability). For example, the state-of-the-art layer-wise approach of [8] performs unstructured pruning by handling weight matrix rows separately, and then greedily merging results. In contrast, we perform structured pruning jointly across multiple rows, which is not only necessary for correctness, but additionally enables us to design an algorithm with a computational complexity that is lower by a full factor of the hidden dimension size. Additionally, structured pruning requires explicitly matching matrix shapes for consecutive layers, and a dedicated strategy for utilizing weight updates even when entire blocks/rows are pruned.

# 3.2 Inference-Aware Structured Pruning (Global Correlations)

We now describe how to augment the algorithm to be inference-aware, in the sense that it accepts inference specifications, such as batch-size, sequence-length, and speedup on the target hardware, as additional inputs to optimize for.

Motivation.
The main benefit of inference-aware structured pruning is that pruning decisions are not guided purely by saliency scores, but instead by the loss-vs-speedup trade-offs associated with the removal of each component in the model. Prior methods, e.g. [24, 27, 59], focus solely on pruning until a specific sparsity threshold is reached, without taking into account the real-world speedups corresponding to that compression threshold, which can vary significantly between settings. For example, a $95\%$-sparse BERT produced by CoFi [59] has a 12x speedup on a V100 GPU, but only 5x on an A100 GPU. With existing methods, if real-world timings fail to meet the inference requirements, the entire process has to be repeated with different sparsity values until the target speedup is achieved, which is both time-consuming and error-prone. An additional advantage of inference-awareness, which we showcase in our GPT experiments in Section 4, is that it enables optimizing for different real-world metrics, such as latency or throughput.

Runtime awareness. We integrate runtime constraints via a latency table [1] for our target inference environment, where we record the time to run an attention block, including all overheads, with $0, \dots, N_{\mathrm{heads}} - 1$ heads pruned, and similarly for the fully-connected block with the intermediate dimension shrunk by a factor of $0.9^i$, for $i = 0, \dots, 42$, i.e. in relative steps of $10\%$ up until $\approx 99\%$ sparsity, following [5]. This allows rapid runtime estimation for different per-layer sparsity configurations. We provide an example of our latency table in Appendix E.

Finding the optimal sparsity configuration. Ultimately, our goal is to find a per-layer sparsity configuration that satisfies a certain speedup constraint while maximizing accuracy.
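As a minimal illustration of the latency-table idea described above, runtime estimation reduces to table lookups and sums. All timings below are fabricated for the example; a real table would hold on-device measurements per layer and pruning level.

```python
# Toy latency table (milliseconds, made up for illustration): per-layer
# attention-block time by number of pruned heads, and FC-block time by
# shrink level i (intermediate dimension scaled by 0.9**i, plus overhead).
attn_ms = {0: 1.00, 4: 0.72, 8: 0.45, 12: 0.05}  # 12/12 heads ≈ block dropped
fc_ms = {i: 2.0 * (0.9 ** i) + 0.1 for i in range(43)}

def estimate_runtime(config):
    """Sum table entries for a per-layer config of (heads_pruned, fc_level)."""
    return sum(attn_ms[h] + fc_ms[f] for h, f in config)

# 12-layer model: dense vs. a uniformly pruned configuration.
dense = estimate_runtime([(0, 0)] * 12)
pruned = estimate_runtime([(8, 10)] * 12)
speedup = dense / pruned
assert speedup > 1.0
```

Because the estimate is just a sum of per-layer entries, candidate sparsity configurations can be scored against a speedup target without ever running the model, which is what makes the search over configurations cheap.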
A popular paradigm for doing this [15, 30] is to produce a large number of pruned models with different sparsity distributions across layers, and then select the one, satisfying the target constraint, with the highest accuracy. To make this computationally feasible, it is crucial that pruning is cheap, yet accurate. ZipLM treats each layer independently, which makes it possible to precompute a database of several pruned versions with different sparsities for each layer. The entire database can be produced in a single run, utilizing the algorithm's one-at-a-time nature. While our algorithm is compatible with various search methods for finding layer-wise profiles [15, 12], we adapt the recent SPDY approach [5].

Structured SPDY search. The SPDY approach is designed for unstructured pruning and assigns a quadratic prior to the per-layer sensitivity of different sparsity levels. This is not valid in our structured pruning scenario, since, for instance, it would suggest that dropping a full layer is only slightly more difficult than pruning it to $99\%$ sparsity. Thus, using standard SPDY would lead the algorithm to explore a large number of sub-optimal configurations, significantly wasting computational resources. To alleviate this problem, for a structured sparsity $s$, we introduce a better prior $p_s$ as the relative layer-wise squared error incurred by pruning, defined as $p_s = \|\widehat{\mathbf{W}}_s \mathbf{X} - \mathbf{W}\mathbf{X}\|_2 / \|\mathbf{W}\mathbf{X}\|_2$, which simply has a value of 1 for a fully dropped layer. Furthermore, the original SPDY approach uses a shrinking neighborhood search, which has high variance in both runtime and solution quality for structured compression. Therefore, we perform a fixed number of 1000 steps, randomly mutating in expectation $10\%$ of the layer-wise sensitivity coefficients. Finally, we note that any candidate evaluated by this procedure actually achieves the target speedup, leading to significantly decreased search time.
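The prior $p_s$ defined above can be sketched directly (a toy numpy example; the sizes, seed, and the deliberately down-scaled column are all illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
d_row, d_col, n = 4, 8, 32
W = rng.standard_normal((d_row, d_col))
W[:, 0] *= 1e-3  # make one column nearly negligible, so pruning it is cheap
X = rng.standard_normal((d_col, n))

def prior(W_hat, W, X):
    """Relative layer-wise error p_s = ||Ŵ_s X - W X||_2 / ||W X||_2."""
    return float(np.linalg.norm(W_hat @ X - W @ X) / np.linalg.norm(W @ X))

# A fully dropped layer (Ŵ = 0) gets prior exactly 1, matching the text.
p_drop = prior(np.zeros_like(W), W, X)

# Zeroing only the negligible column yields a much smaller prior.
W_light = W.copy()
W_light[:, 0] = 0.0
p_light = prior(W_light, W, X)

assert np.isclose(p_drop, 1.0)
assert p_light < p_drop
```

The sharp gap between `p_light` and `p_drop` is exactly what the quadratic prior of unstructured SPDY fails to capture, which motivates replacing it in the structured setting.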
We validate our approach in Appendix F, where we demonstrate that our speedup estimations are indeed very accurate in practice: real-world on-device measurements deviate by at most $5.28\%$ from their expected values.

# 3.3 Layer-wise Token Distillation

For structured pruning, it is common to apply layer-wise distillation objectives to transfer intermediate representations. However, structured pruning creates compatibility issues relative to the fixed teacher architecture, leading most methods to develop customized distillation strategies. A popular approach, introduced in [21] and improved by [59], solves the problem via a static [21] or dynamic [59] mapping of a subset of teacher layers to a subset of student layers. Its main limitation is manual layer selection, where making the "optimal" choice would require evaluating all possible combinations, which can be very expensive. Another limitation is shape-matching between intermediate layers, which is solved by introducing a learnable linear transformation matrix attached to student outputs.

Our approach. We address these challenges differently, by leveraging the fact that ZipLM preserves the hidden dimension size, and propose to distill intermediate token representations across the entire model. The resulting minimization objective consists of three components:

$$
\mathcal{L}\left(\theta^{\mathrm{s}}, \theta^{\mathrm{t}} | x\right) = \lambda_{1} \mathcal{L}_{\mathrm{task}}\left(\theta^{\mathrm{s}} | x\right) + \lambda_{2} \mathcal{L}_{\mathrm{logit}}\left(\theta^{\mathrm{s}}, \theta^{\mathrm{t}} | x\right) + \lambda_{3} \mathcal{L}_{\mathrm{token}}\left(\theta^{\mathrm{s}}, \theta^{\mathrm{t}} | x\right), \tag{5}
$$

where $\theta^{\mathrm{s}}$ and $\theta^{\mathrm{t}}$ represent the student and teacher models respectively, $x$ are the inputs, $\mathcal{L}_{\mathrm{task}}$ is the loss associated with the task (e.g.
cross-entropy for text-classification), $\mathcal{L}_{\mathrm{logit}}$ is the KL-divergence between output logits as described in [16], and $\mathcal{L}_{\mathrm{token}}$ is our token-level distillation loss. Hidden tensors passed between consecutive transformer layers are of constant shape $\mathbf{H} \in \mathbb{R}^{B \times seq \times H}$, where $B$ stands for the batch size, $seq$ for the sequence length, and $H$ for the hidden size defined by the model architecture. This tensor can be interpreted as a collection of $B \times seq$ vectors $\mathbf{h} \in \mathbb{R}^{H}$, each carrying an intermediate model representation of the input tokens $x$. We define the loss $\mathcal{L}_{\mathrm{token}}$ as the Euclidean distance $\Delta$ between the vectors $\mathbf{h}$ corresponding to each non-padded token in the input sequence, averaged over all unpruned layers. Formally, for a layer $k$, it is defined as

$$
\mathcal{L}_{\mathrm{token}}^{k} = \frac{1}{\sum_{j=1}^{B \times seq} \mathbb{1}[j \notin \mathbf{P}]} \sum_{j=1}^{B \times seq} \mathbb{1}[j \notin \mathbf{P}] \cdot \Delta\left(\mathbf{h}_{j}^{\theta_{\mathrm{s}}}, \mathbf{h}_{j}^{\theta_{\mathrm{t}}}\right), \tag{6}
$$

where $\mathbf{P}$ stands for the set of padding tokens. This formulation encourages the student model to generate vector representations for each token that are similar to those produced by the teacher model. In Appendix B, we present ablation studies and comparisons for ZipLM and CoFi, with and without their respective distillation objectives.

# 4 Experiments

Setup. Given a pre-trained model, a dataset, and a set of desired speedups in a target inference environment, we iteratively fine-tune and prune the model in a structured way such that in the end we obtain a set of accurate compressed models, one for each speedup target.
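As a concrete reference for the token-level distillation loss of Eq. (6), a minimal numpy sketch follows. The shapes, values, and padding mask are hypothetical; in practice the vectors $\mathbf{h}$ come from student and teacher forward passes at an unpruned layer $k$.

```python
import numpy as np

# Sketch of the token-level distillation loss of Eq. (6), hypothetical data.
B, seq, H = 2, 4, 8
rng = np.random.default_rng(1)
h_student = rng.standard_normal((B * seq, H))  # student token representations
h_teacher = rng.standard_normal((B * seq, H))  # teacher token representations
is_pad = np.zeros(B * seq, dtype=bool)
is_pad[-2:] = True                             # last two positions are padding

def token_loss(h_s, h_t, is_pad):
    """Average Euclidean distance over non-padded token positions."""
    dist = np.linalg.norm(h_s - h_t, axis=-1)  # per-token Euclidean distance
    keep = ~is_pad
    return float((dist * keep).sum() / keep.sum())

loss_k = token_loss(h_student, h_teacher, is_pad)  # > 0 for differing reps
```

Because ZipLM preserves the hidden size $H$, the student and teacher vectors are directly comparable and no learned projection matrix is needed.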
We consider pruning of the standard $\mathrm{BERT}_{\mathrm{base}}$ and $\mathrm{BERT}_{\mathrm{large}}$ architectures, evaluated on the dev-sets of established benchmarks: SQuADv1.1 [42] and a subset of GLUE [54] tasks: SST-2 [48], QNLI [54], MNLI [57], and QQP [46], selected to match publicly-available checkpoints from prior work. For a precise comparison to prior work [59], our inference environment is a single NVIDIA V100 16GB GPU, a batch size of 128, and sequence lengths of 384 and 128 for SQuAD and GLUE tasks, respectively. In addition to encoder-based BERT models, we also consider pruning of the decoder-based GPT2 model on the OpenWebText corpus [10], for which we consider two inference environments: pruning for throughput (batch-size=16, sequence-length=1024) and pruning for latency (batch-size=1, a set of prompts with varying lengths). For illustration, our pipeline is depicted in Figure 1. In Appendices H and I, we report exact values for all results, as well as hyper-parameters for reproducibility.

![](images/6fda0de858d1bd38ad30dce3dae9f7bfc625e198cb943fc631491ec32316a4b1.jpg)
Figure 1: Illustration of the ZipLM pipeline: 1) inference specifications, 2) runtime benchmarking of candidates for pruning, 3) gradual structured pruning until all speedup targets are met.

Baselines. In the gradual pruning setting, we explore the performance of ZipLM pruning of BERT and GPT2-family models across a wide range of inference speedup targets, ranging from 2x to 15x in unit increments. This allows us to compare the effectiveness of our approach against a diverse set of structured pruning and distillation-based techniques, including state-of-the-art CoFi pruning, competitive Block Movement Pruning, and distillation approaches such as TinyBERT, DistilBERT, DistilGPT2, MobileBERT, MiniLM, and DynaBERT. Additionally, we include comparisons with other relevant methods. For fairness, we follow [59] and report TinyBERT and DynaBERT results without data augmentations.
In the post-training/one-shot setting, which does not allow retraining, we demonstrate that ZipLM outperforms the prior state-of-the-art approach of [26]. We evaluate inference speedups of all models in the same environment, unless the models are not publicly available, in which case we report speedups from their respective papers. We refer to ZipLM-compressed BERT models as ZipBERT, and to ZipLM-compressed GPT2 models as ZipGPT2.

# 4.1 Gradual Structured Pruning

$\mathrm{BERT}_{\mathrm{base}}$ results. In Figure 2 we compare structured compression methods on the SQuADv1.1 task. ZipLM outperforms both CoFi and TinyBERT, the prior state-of-the-art techniques, by 3 points in F1 score at the same speedup factor, while at the same F1 score it improves inference speedup by at least $60\%$. In Figure 3, we extend this comparison to a subset of GLUE tasks and provide an exhaustive overview of various structured compression techniques. Results on the remaining four GLUE tasks are provided in Appendix Figure 7. As can be observed, distillation-based methods usually provide only one or a few structurally-compressed models, due to the massive costs associated with training each new model from scratch. Relative to the most competitive approaches, such as TinyBERT, CoFi, and MiniLM, ZipLM provides consistent improvements in both accuracy and speedup, while providing guarantees for each compressed model in terms of the expected speedup in the target inference environment. Interestingly, on tasks like QQP and SST-2, ZipLM is able to compress the $\mathrm{BERT}_{\mathrm{base}}$ model up to 6x and 10x speedups, respectively, while maintaining the accuracy of the dense model. In Appendix D, we provide additional comparisons against CoFi on test-set results from the official GLUE evaluation server.
![](images/ac6c0532e6e9dc7cac3909f985a36f34f55bad3b680b50c1976c692eed46cb2f.jpg)
Figure 2: Structured compression of $\mathrm{BERT}_{\mathrm{base}}$ (left) and $\mathrm{BERT}_{\mathrm{large}}$ (right) on the SQuADv1.1 task. Dashed horizontal lines represent full and $99\%$ accuracy recovery of the uncompressed model.

![](images/a00b5fcc0f7d84b031046e79cd6ce17ef45ce694e07caf11b25a476edc81239e.jpg)

![](images/7332c72cd1fb1acf30fe5c6138b2b1987fb8c2cc6ec746278cbb00fc5eb58fc4.jpg)
Figure 3: Structured compression of $\mathrm{BERT}_{\mathrm{base}}$ on QNLI, MNLI, SST-2, and QQP tasks. Dashed horizontal lines represent full and $99\%$ accuracy recovery of the uncompressed model.

![](images/526e468fc2a6381e8419710a056094ea7f2eb85172b905f18b556b5879942b91.jpg)

![](images/8f4fec798b18b2e33d2d88219421987c6d300b0047d433a374cec9c830523d85.jpg)

![](images/435aee85de64694009a73d2c3a1589df1bce6c6968e2eb6b1628b2a528007a9c.jpg)

$\mathrm{BERT}_{\mathrm{large}}$ results. To verify that our approach is not limited to the $\mathrm{BERT}_{\mathrm{base}}$ model, we apply ZipLM structured pruning to the $3\mathrm{x}$ larger $\mathrm{BERT}_{\mathrm{large}}$ model on the SQuADv1 task. In this setup, we compare against the only two approaches that have attempted to structurally compress this larger model: Block Movement Pruning and the distillation-based MobileBERT. As can be seen in Figure 2, ZipLM is able to compress $\mathrm{BERT}_{\mathrm{large}}$ up to $4\mathrm{x}$ faster inference while maintaining the F1 score of the uncompressed model. At the same F1 score as the fastest Block Movement Pruning model (3x), ZipLM doubles the inference speedup (6x). A result worth emphasizing is that ZipLM is even able to match the performance of the highly optimized MobileBERT model by simply compressing the baseline BERT architecture, without the many additional optimizations and custom components used by MobileBERT.
Specifically, some of the module- and operator-level optimizations used by MobileBERT include: bottleneck structures and carefully-balanced self-attention and feed-forward modules, embedding-layer factorization, a bespoke closed-source teacher model, replacement of LayerNorm layers with lower-latency NoNorm layers, and replacement of GELU activation functions with ReLU activations.

99% recovery. The MLPerf Benchmark [36] targets recovery of $>99\%$ of the baseline accuracy. At this industry-defined threshold, ZipLM models set new state-of-the-art performance across all of the considered datasets, with the following $\mathrm{BERT}_{\mathrm{base}}$ inference speedups: 5x on the SQuADv1 task, 6x on QNLI and MNLI, and, surprisingly, 13x and 15x on SST-2 and QQP, respectively. When compressing $\mathrm{BERT}_{\mathrm{large}}$ on the SQuADv1 task, ZipLM produces a 6x faster model at $99\%$ recovery.

GPT2 results. To validate that our approach applies not only to encoder-based models, we apply ZipLM structured pruning to the decoder-based GPT2 model. In addition, to further demonstrate the inference-awareness of our approach and its importance for real-world applications, we consider two different regimes: pruning for throughput and pruning for latency. An example application for the former regime is a server-side deployment where the model processes many queries at the same time, while an application for the latter is a text-generation scenario where the model is used in an online fashion to auto-complete a user's text.

For a fair comparison, we follow the DistilGPT2 setup [44] and prune the 124M-parameter GPT2 variant on the OpenWebText corpus, followed by zero-shot evaluations, without any fine-tuning, on the test split of the WikiText [37] dataset. Because of the enormous vocabulary size, the maximum achievable speedup in the throughput regime for this model is roughly $3.5\mathrm{x}$.
Thus, we run ZipLM pruning with $1.5\mathrm{x}$, $2\mathrm{x}$, $2.5\mathrm{x}$, and $3\mathrm{x}$ speedup targets. For the latency regime, we report the median time to process sequences of various lengths when generating text with Top-K sampling [4]. In Table 1, we present zero-shot evaluations of the uncompressed GPT2 model, which serves as a baseline relative to the competing DistilGPT2 approach, and four variants of our ZipLM-pruned GPT2. In the pruning-for-throughput scenario, at similar speedup and decoder size (1.6x-vs-1.5x and 42.5M-vs-47.3M), ZipGPT2 achieves significantly lower perplexities than DistilGPT2. Further, at slightly better (lower) perplexity, ZipGPT2 reduces the decoder size from 42.5M to only 26.5M parameters (a 60% reduction) and improves the speedup from 1.6x to 2.1x (30% faster). In the pruning-for-latency scenario, at a similar speedup of 1.9x-vs-2.0x, ZipGPT2 reduces the decoder size by 3M parameters while providing an almost 2-point improvement in zero-shot perplexity.

Table 1: Zero-shot perplexity (PPL) of compressed GPT2 in two regimes: pruning for throughput and pruning for latency. *GPT2 was trained by OpenAI [41] on a much larger closed-source dataset and for significantly longer. The only direct comparison is between DistilGPT2 and ZipGPT2.
| Model | Pruning for throughput | | | Pruning for latency | | |
|---|---|---|---|---|---|---|
| | Speedup | Decoder size | WikiText-103 PPL ↓ | Speedup | Decoder size | WikiText-103 PPL ↓ |
| GPT2* | 1.0x | 85.0M | 28.5 | 1.0x | 85.0M | 28.5 |
| DistilGPT2 | 1.6x | 42.5M | 43.0 | 1.9x | 42.5M | 43.0 |
| ZipGPT2 (ours) | 1.5x | 47.3M | 35.4 | 1.6x | 48.7M | 37.8 |
| | 2.1x | 26.5M | 41.5 | 2.0x | 39.2M | 41.2 |
| | 2.7x | 14.0M | 50.4 | 2.2x | 26.6M | 49.0 |
| | 3.3x | 5.7M | 72.1 | 2.5x | 20.7M | 55.0 |
Table 2: One-shot (post-training) structured pruning of $\mathrm{BERT}_{\mathrm{base}}$ on three downstream datasets and two speedup targets.
| Speedup | Kwon et al. [26] | $\mathrm{ZipBERT}_{\mathrm{base}}$ |
|---|---|---|
| *SQuAD, F1* | | |
| 1.5x | 86.2 | 87.1 |
| 2.0x | 76.5 | 84.1 |
| *QQP, acc.* | | |
| 1.5x | 89.5 | 89.7 |
| 2.0x | 83.9 | 84.8 |
| *MNLI, acc.* | | |
| 1.5x | 82.8 | 83.0 |
| 2.0x | 78.1 | 78.2 |
+ +# 4.2 On the Importance of Inference-Awareness + +Depth vs. width pruning. A particularly interesting illustration of the importance of inference-awareness in the pruning algorithm is given by our GPT2 models running directly in the PyTorch-HuggingFace framework, which can be used in two different modes: batch-prediction (throughput-constrained) and text-generation (latency-constrained). For the former, inputs are typically large, and shrinking weight matrices is an effective way to achieve speedups. However, for the latter, the inputs are much smaller, and the size of weight matrices is no longer the primary bottleneck. + +In this scenario, the only way to achieve substantial speedups is to completely drop some modules, which prior methods cannot account for as they solely optimize for overall model sparsity. However, with ZipLM, runtime measurements from the target inference environment guide pruning decisions, allowing it to learn the best way to compress the model for an optimal speedup-accuracy trade-off. Our GPT2 compression results in Table 1 clearly illustrate and support these statements. Even though pruned for the same speedup target, the final architectures of ZipGPT2 models are drastically different. For the throughput-constrained scenario, the model's depth was preserved but the matrix dimensions were significantly reduced (roughly by a factor of 10) making the corresponding multiplications with large input tensors much faster. In contrast, for the latency-constrained scenario, the model's width (shapes of weight matrices) was mostly preserved but the depth was shrunk almost by a factor of 4, making the forward pass with small inputs faster by reducing the effective number of modules. + +Inference device capabilities. Incorporating capabilities of the inference device is another important aspect for effective structured pruning which prior methods do not account for as they solely optimize for higher sparsities. 
As noted in Section 3.2, this is reflected in large discrepancies between speedups obtained on different devices: e.g., a compressed model with a $12\mathrm{x}$ speedup on a V100 is only $5\mathrm{x}$ faster on an A100 GPU. This arises because the A100 GPU is significantly more powerful, and thus faster on the dense model; at the same time, it is highly underutilized for small matrices, which significantly limits the speedups at very high sparsity. To illustrate this, we measured the speedup from reducing the MLP size for both GPU types (see Table 3). As can be seen, pruning to $\approx 90\%$ sparsity $(3072\rightarrow 302)$ gives an $\approx 7\mathrm{x}$ speedup on a V100 but only $\approx 3\mathrm{x}$ on an A100. Such differences are automatically captured by ZipLM, where pruning for sparsity is replaced by pruning for speedup.

Pruning for speedup vs. pruning for sparsity. In Figure 4 we compare ZipLM results when the pruning target is sparsity (as in prior approaches) and when it is speedup (the ZipLM approach). Pruning for speedup brings significant improvements, up to 10 points, especially at higher speedups, where inference-awareness is particularly important: the algorithm does not remove components that bring no further speedup, and therefore helps preserve accuracy.

![](images/e75109ff524711e966ec742f0bf134d9d6e97017d77296df5f8da9ec9ce4aa52.jpg)
Figure 4: Ablation study for the impact of the pruning target: pruning for sparsity (like prior approaches) versus pruning for speedup (the ZipLM approach).

Table 3: Speedups from shrinking the intermediate size of MLPs in the FFN section of a Transformer layer, on different GPUs.
| MLP size | Speedup (V100) | Speedup (A100) |
|---|---|---|
| 3072 | 1.0x | 1.0x |
| 1814 | 1.6x | 1.1x |
| 1322 | 2.0x | 1.4x |
| 302 | 6.9x | 3.1x |
| 130 | 11.8x | 4.4x |
| 76 | 13.1x | 4.4x |
| 33 | 14.8x | 4.4x |
+ +# 4.3 Post-training/One-shot Structured Pruning + +We now study the performance of ZipLM when applied purely in one-shot, without any retraining. In this setting, we compare against the state-of-the-art method of Kwon et al. [26] which combines several heuristics: Fisher-based mask search, mask rearrangement, and mask tuning. Instead of heuristics, our pruning framework utilizes direct end-to-end loss information to find the optimal sparsity configuration. During the warm-start phase, [26] utilizes a diagonal Fisher matrix to estimate the significance of heads and filters, which discards correlations caused by off-diagonal elements. Although the approach attempts to address this limitation by approximating correlations within a single layer, it will not capture global dependencies. Furthermore, the weights are adapted for layerwise reconstruction at the very end of the compression step, whereas our method does it continuously during the pruning (please see Section 4 for the significance of doing this). For a fair comparison, we apply the authors' own implementation in latency-constrained mode on the exact same model weights. Table 2 presents results on several datasets and speedups, showing that ZipLM is even more accurate than the approach designed and optimized specifically for the post-training/one-shot pruning. + +Sensitivity to calibration data. Additionally, we have found that ZipLM is very robust to the amount of calibration data. In Table 4 we present a sensitivity analysis with respect to the number of calibration samples. We one-shot prune $\mathrm{BERT}_{\mathrm{base}}$ on the SQuADv1.1 task for two speedup targets: $1.5\mathrm{x}$ and $2.0\mathrm{x}$ . In this setup, we compare results against Kwon et al. [26], which uses 2048 samples by default. As can be seen from the table, ZipLM outperforms prior state-of-the-art starting at only 32 samples. As we increase the number of samples, the results improve, up to 2 points in F1 score. 
+ +# 5 Discussion and Extensions + +CPU as an LLM-inference environment. In Section 4 we have focused on various GPU-based inference environments as it enabled us to conduct fair comparisons against prior structural compression techniques. However, CPUs present another compelling inference environment focused on edge deployment of LLMs. Therefore, we target the recently proposed compound compression pipeline of [24], which involves three steps: structured pruning, unstructured pruning, and quantization. We replace their structured pruning approach based on layer dropping with ZipLM. As a result, at full accuracy recovery, we are able to improve speedup from $3\mathrm{x}$ to $13\mathrm{x}$ , and at the largest compression ratio from $30\mathrm{x}$ to $50\mathrm{x}$ . Due to space constraints, we provide full results in Appendix A. + +Computational efficiency. Relative to distillation-based methods, structured pruning is an order of magnitude more efficient in terms of GPU hours due to the massive costs associated with pretraining from scratch for each compressed model [59, 51, 21]. For efficiency comparisons to CoFi, we consider the task of producing a full family of compressed $\mathrm{BERT}_{\mathrm{base}}$ models with speedup targets ranging from $2\mathrm{x}$ to $15\mathrm{x}$ . In this setup, ZipLM requires only 115 epochs in total, whereas CoFi would require 560 epochs. Therefore, ZipLM is 4.87 times more efficient than CoFi. In terms of end-to-end runtime, ZipLM produces the entire family of compressed $\mathrm{BERT}_{\mathrm{base}}$ models on a single RTX A6000 GPU in $\sim 35$ hours on larger datasets (e.g. MNLI) and only $\sim 10$ hours on smaller ones (e.g. SST2). 
Finally, it is worth emphasizing that we have not taken into account the cost of hyper-parameter tuning in the above comparisons, but that this is very favorable to ZipLM: it uses a single set of hyper-parameters to produce an entire family of compressed models while other methods require hyper-parameter tuning for each model independently. + +![](images/44c6fbf4221d34fa54c5340ad5be7f0b12bd004ac6ed5a1eb25f87cdb6c99782.jpg) +Figure 5: Scaling laws of structured pruning vs. distillation on the standard BERT architecture. + +Table 4: Sensitivity to the number of calibration samples. + +
| Method | Num. samples | F1 at 1.5x | F1 at 2.0x |
|---|---|---|---|
| ZipLM | 4 | 82.3 | 48.4 |
| | 32 | 86.8 | 82.6 |
| | 128 | 86.8 | 83.6 |
| | 512 | 86.8 | 84.1 |
| | 2048 | 87.1 | 84.1 |
| | 4096 | 87.6 | 84.7 |
| Kwon et al. [26] | 2048 | 86.2 | 76.5 |
+ +Scaling laws for structured pruning. To further understand the accuracy-speedup trade-offs, we run ZipLM on larger speedup ratios, up to $55\mathrm{x}$ for $\mathrm{BERT}_{\text{large}}$ and $75\mathrm{x}$ for $\mathrm{BERT}_{\text{base}}$ . To the best of our knowledge, this is the first result in literature demonstrating that such extreme compression ratios are achievable with structured pruning without model collapse. In Figure 5, we compare these results against distillation-based downscaling of the BERT architecture [52]. The results clearly demonstrate that each of the pruned models, based either on $\mathrm{BERT}_{\text{large}}$ or $\mathrm{BERT}_{\text{base}}$ , significantly outperforms comparable pre-trained variants. An emergent behavior that can be observed is that structurally pruned models tend to follow a linear scaling law, meaning that the accuracy decreases linearly with the increase of the speedup ratio, at a slope given by the original model. Fitting linearly via least squares produces the following expressions for the accuracy-speedup relationship: $\mathrm{F1}_{\text{large}} \approx 92.1 - 0.3 \times \mathrm{speedup}_{\text{large}}$ , and $\mathrm{F1}_{\text{base}} \approx 90.3 - 0.6 \times \mathrm{speedup}_{\text{base}}$ . Thus, the rate of decrease in accuracy for $\mathrm{BERT}_{\text{base}}$ is twice as large as that of $\mathrm{BERT}_{\text{large}}$ , which can be attributed to the presence of more redundant representations in the larger model, making it more resilient to pruning. In Appendix G we provide additional analysis of the structure of pruned models. + +# References + +[1] Han Cai, Chuang Gan, Tianzhe Wang, Zhekai Zhang, and Song Han. Once-for-all: Train one network and specialize it for efficient deployment. arXiv preprint arXiv:1908.09791, 2019. +[2] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training of deep bidirectional transformers for language understanding. 
In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota, June 2019. Association for Computational Linguistics. +[3] Angela Fan, Edouard Grave, and Armand Joulin. Reducing transformer depth on demand with structured dropout. In International Conference on Learning Representations, 2019. +[4] Angela Fan, Mike Lewis, and Yann Dauphin. Hierarchical neural story generation. arXiv preprint arXiv:1805.04833, 2018. +[5] Elias Frantar and Dan Alistarh. SPDY: Accurate pruning with speedup guarantees. arXiv preprint arXiv:2201.13096, 2022. +[6] Elias Frantar, Saleh Ashkboos, Torsten Hoefler, and Dan Alistarh. Gptq: Accurate post-training quantization for generative pre-trained transformers. arXiv preprint arXiv:2210.17323, 2022. +[7] Elias Frantar, Eldar Kurtic, and Dan Alistarh. M-fac: Efficient matrix-free approximations of second-order information. Advances in Neural Information Processing Systems, 34, 2021. +[8] Elias Frantar, Sidak Pal Singh, and Dan Alistarh. Optimal Brain Compression: A framework for accurate post-training quantization and pruning. arXiv preprint arXiv:2208.11580, 2022. Accepted to NeurIPS 2022, to appear. +[9] Amir Gholami, Sehoon Kim, Zhen Dong, Zhewei Yao, Michael W Mahoney, and Kurt Keutzer. A survey of quantization methods for efficient neural network inference. arXiv preprint arXiv:2103.13630, 2021. +[10] Aaron Gokaslan and Vanya Cohen. Openwebtext corpus, 2019. +[11] Jianping Gou, Baosheng Yu, Stephen J Maybank, and Dacheng Tao. Knowledge distillation: A survey. International Journal of Computer Vision, 129(6):1789-1819, 2021. +[12] Zichao Guo, Xiangyu Zhang, Haoyuan Mu, Wen Heng, Zechun Liu, Yichen Wei, and Jian Sun. Single path one-shot neural architecture search with uniform sampling. 
In Computer Vision-ECCV 2020: 16th European Conference, Glasgow, UK, August 23-28, 2020, Proceedings, Part XVI 16, pages 544-560. Springer, 2020. +[13] Babak Hassibi and David Stork. Second order derivatives for network pruning: Optimal brain surgeon. Advances in neural information processing systems, 5, 1992. +[14] Yang He, Ping Liu, Ziwei Wang, Zhilan Hu, and Yi Yang. Filter pruning via geometric median for deep convolutional neural networks acceleration. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 4340-4349, 2019. +[15] Yihui He, Ji Lin, Zhijian Liu, Hanrui Wang, Li-Jia Li, and Song Han. Amc: Automl for model compression and acceleration on mobile devices. In Proceedings of the European conference on computer vision (ECCV), pages 784-800, 2018. +[16] Geoffrey Hinton, Oriol Vinyals, Jeff Dean, et al. Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531, 2(7), 2015. +[17] Torsten Hoefler, Dan Alistarh, Tal Ben-Nun, Nikoli Dryden, and Alexandra Peste. Sparsity in deep learning: Pruning and growth for efficient inference and training in neural networks. Journal of Machine Learning Research, 22(241):1-124, 2021. +[18] Lu Hou, Zhiqi Huang, Lifeng Shang, Xin Jiang, Xiao Chen, and Qun Liu. Dynabert: Dynamic bert with adaptive width and depth. Advances in Neural Information Processing Systems, 33:9782-9793, 2020. + +[19] Itay Hubara, Brian Chmiel, Moshe Island, Ron Banner, Joseph Naor, and Daniel Soudry. Accelerated sparse neural training: A provable and efficient method to find n: m transposable masks. Advances in Neural Information Processing Systems, 34:21099-21111, 2021. +[20] Benoit Jacob, Skirmantas Kligys, Bo Chen, Menglong Zhu, Matthew Tang, Andrew Howard, Hartwig Adam, and Dmitry Kalenichenko. Quantization and training of neural networks for efficient integer-arithmetic-only inference. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 2704-2713, 2018. 
+[21] Xiaoqi Jiao, Yichun Yin, Lifeng Shang, Xin Jiang, Xiao Chen, Linlin Li, Fang Wang, and Qun Liu. Tinybert: Distilling bert for natural language understanding. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 4163-4174, 2020. +[22] Aran Komatsuzaki. One epoch is all you need. arXiv preprint arXiv:1906.06669, 2019. +[23] Eldar Kurtic and Dan Alistarh. Gmp*: Well-tuned global magnitude pruning can outperform most bert-pruning methods. arXiv preprint arXiv:2210.06384, 2022. +[24] Eldar Kurtic, Daniel Campos, Tuan Nguyen, Elias Frantar, Mark Kurtz, Benjamin Fineran, Michael Goin, and Dan Alistarh. The optimal bert surgeon: Scalable and accurate second-order pruning for large language models. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 4163-4181, 2022. +[25] Mark Kurtz, Justin Kopinsky, Rati Gelashvili, Alexander Matveev, John Carr, Michael Goin, William Leiserson, Sage Moore, Bill Nell, Nir Shavit, and Dan Alistarh. Inducing and exploiting activation sparsity for fast inference on deep neural networks. In Hal Daumé III and Aarti Singh, editors, Proceedings of the 37th International Conference on Machine Learning, volume 119 of Proceedings of Machine Learning Research, pages 5533–5543, Virtual, 13–18 Jul 2020. PMLR. +[26] Woosuk Kwon, Sehoon Kim, Michael W Mahoney, Joseph Hassoun, Kurt Keutzer, and Amir Gholami. A fast post-training pruning framework for transformers. arXiv preprint arXiv:2204.09656, 2022. +[27] François Lagunas, Ella Charlaix, Victor Sanh, and Alexander Rush. Block pruning for faster transformers. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 10619–10629, Online and Punta Cana, Dominican Republic, November 2021. Association for Computational Linguistics. 
[28] Quentin Lhoest, Albert Villanova del Moral, Yacine Jernite, Abhishek Thakur, Patrick von Platen, Suraj Patil, Julien Chaumond, Mariama Drame, Julien Plu, Lewis Tunstall, Joe Davison, Mario Šaško, Gunjan Chhablani, Bhavitvya Malik, Simon Brandeis, Teven Le Scao, Victor Sanh, Canwen Xu, Nicolas Patry, Angelina McMillan-Major, Philipp Schmid, Sylvain Gugger, Clément Delangue, Théo Matussière, Lysandre Debut, Stas Bekman, Pierrick Cistac, Thibault Goehringer, Victor Mustar, François Lagunas, Alexander Rush, and Thomas Wolf. Datasets: A community library for natural language processing. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 175-184. Association for Computational Linguistics, November 2021.
[29] Hao Li, Asim Kadav, Igor Durdanovic, Hanan Samet, and Hans Peter Graf. Pruning filters for efficient convnets. arXiv preprint arXiv:1608.08710, 2016.
[30] Yawei Li, Kamil Adamczewski, Wen Li, Shuhang Gu, Radu Timofte, and Luc Van Gool. Revisiting random channel pruning for neural network compression. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 191-201, 2022.
[31] Lucas Liebenwein, Cenk Baykal, Harry Lang, Dan Feldman, and Daniela Rus. Provable filter pruning for efficient neural networks. arXiv preprint arXiv:1911.07412, 2019.
[32] Ji Lin, Jiaming Tang, Haotian Tang, Shang Yang, Xingyu Dang, and Song Han. AWQ: Activation-aware weight quantization for LLM compression and acceleration. arXiv preprint arXiv:2306.00978, 2023.

[33] Liyang Liu, Shilong Zhang, Zhanghui Kuang, Aojun Zhou, Jing-Hao Xue, Xinjiang Wang, Yimin Chen, Wenming Yang, Qingmin Liao, and Wayne Zhang. Group fisher pruning for practical network compression. In International Conference on Machine Learning, pages 7021-7032. PMLR, 2021.
[34] Zhuang Liu, Jianguo Li, Zhiqiang Shen, Gao Huang, Shoumeng Yan, and Changshui Zhang.
Learning efficient convolutional networks through network slimming. In Proceedings of the IEEE international conference on computer vision, pages 2736-2744, 2017. +[35] Zichang Liu, Jue Wang, Tri Dao, Tianyi Zhou, Binhang Yuan, Zhao Song, Anshumali Shrivastava, Ce Zhang, Yuandong Tian, Christopher Re, et al. Deja vu: Contextual sparsity for efficient llms at inference time. In International Conference on Machine Learning, pages 22137-22176. PMLR, 2023. +[36] Peter Mattson, Vijay Janapa Reddi, Christine Cheng, Cody Coleman, Greg Diamos, David Kanter, Paulius Micikevicius, David Patterson, Guenther Schmuelling, Hanlin Tang, et al. Mlperf: An industry standard benchmark suite for machine learning performance. IEEE Micro, 40(2):8-16, 2020. +[37] Stephen Merity, Caiming Xiong, James Bradbury, and Richard Socher. Pointer sentinel mixture models, 2016. +[38] Markus Nagel, Rana Ali Amjad, Mart Van Baalen, Christos Louizos, and Tijmen Blankevoort. Up or down? adaptive rounding for post-training quantization. In International Conference on Machine Learning, pages 7197-7206. PMLR, 2020. +[39] NeuralMagic. Deep sparse: A fast cpu inference engine, 2021. +[40] Matan Ben Noach and Yoav Goldberg. Compressing pre-trained language models by matrix decomposition. In Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing, pages 884-889, 2020. +[41] Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. Language models are unsupervised multitask learners. OpenAI blog, 1(8):9, 2019. +[42] Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. Squad: 100,000+ questions for machine comprehension of text. In EMNLP, 2016. +[43] Hassan Sajjad, Fahim Dalvi, Nadir Durrani, and Preslav Nakov. Poor man's bert: Smaller and faster transformer models. arXiv preprint arXiv:2004.03844, 2020. 
+[44] Victor Sanh, Lysandre Debut, Julien Chaumont, and Thomas Wolf. Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108, 2019. +[45] Victor Sanh, Thomas Wolf, and Alexander Rush. Movement pruning: Adaptive sparsity by fine-tuning. Advances in Neural Information Processing Systems, 33:20378-20389, 2020. +[46] S. Shankar. Identifying quora question pairs having the same intent. 2017. +[47] Sidak Pal Singh and Dan Alistarh. Woodfisher: Efficient second-order approximation for neural network compression. Advances in Neural Information Processing Systems, 33, 2020. +[48] Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D Manning, Andrew Y Ng, and Christopher Potts. Recursive deep models for semantic compositionality over a sentiment treebank. In Proceedings of the 2013 conference on empirical methods in natural language processing, pages 1631-1642, 2013. +[49] Yang Sui, Miao Yin, Yi Xie, Huy Phan, Saman Aliari Zonouz, and Bo Yuan. Chip: Channel independence-based pruning for compact neural networks. Advances in Neural Information Processing Systems, 34:24604-24616, 2021. + +[50] Siqi Sun, Yu Cheng, Zhe Gan, and Jingjing Liu. Patient knowledge distillation for bert model compression. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 4323-4332, 2019. +[51] Zhiqing Sun, Hongkun Yu, Xiaodan Song, Renjie Liu, Yiming Yang, and Denny Zhou. Mobilebert: a compact task-agnostic bert for resource-limited devices. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 2158–2170, 2020. +[52] Iulia Turc, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Well-read students learn better: The impact of student initialization on knowledge distillation. ArXiv, abs/1908.08962, 2019. 
+[53] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. Advances in neural information processing systems, 30, 2017. +[54] Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R. Bowman. Glue: A multi-task benchmark and analysis platform for natural language understanding. ArXiv, abs/1804.07461, 2018. +[55] Wenhui Wang, Furu Wei, Li Dong, Hangbo Bao, Nan Yang, and Ming Zhou. Minilm: Deep self-attention distillation for task-agnostic compression of pre-trained transformers. Advances in Neural Information Processing Systems, 33:5776–5788, 2020. +[56] Ziheng Wang, Jeremy Wohlwend, and Tao Lei. Structured pruning of large language models. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 6151-6162, 2020. +[57] Adina Williams, Nikita Nangia, and Samuel Bowman. A broad-coverage challenge corpus for sentence understanding through inference. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1112–1122. Association for Computational Linguistics, 2018. +[58] Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumont, Clement Delangue, Anthony Moi, Pierrick Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander M. Rush. Transformers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38-45, Online, October 2020. Association for Computational Linguistics. +[59] Mengzhou Xia, Zexuan Zhong, and Danqi Chen. Structured pruning learns compact and accurate models.
In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1513-1528, 2022. +[60] Canwen Xu, Wangchunshu Zhou, Tao Ge, Furu Wei, and Ming Zhou. Bert-of-theseus: Compressing bert by progressive module replacing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 7859-7869, 2020. + +# A Compound Compression for Edge Deployment + +Deploying large language models in edge environments requires running inference on low-power devices such as CPUs. Therefore, we follow the compound compression approach from [24], which combines structured pruning, unstructured pruning, and quantization for efficient inference on CPUs. We start with ZipLM structurally pruned models, and apply the state-of-the-art oBERT unstructured pruning method [24] on top of them, to $80\%$ sparsity. After structured and unstructured pruning, we apply quantization-aware training (QAT) [20] to quantize FP32 weights into INT8 representations. We benchmark these compound compressed models by running inference in the DeepSparse [39] engine, on a single core of an Intel Cascade Lake CPU. In this setting, we compare our results against the compound compression pipeline of [24], which applies layer dropping as a form of structured pruning. As can be seen from Figure 6, when we substitute layer dropping with principled structured pruning via ZipLM, the resulting compound compressed models achieve very competitive latency-vs-accuracy performance in the edge-inference regime. At full accuracy recovery, ZipLM improves the speedup from 3x to 13x, while at the largest compression ratio ZipLM improves the speedup from 30x to 50x. + +![](images/3a2a4f33e16dd2c30d228f10f96429c41d856c87805fe74f1508ccbff8f651af.jpg) +Figure 6: Improvements in CPU-inference speedups for compound compressed $\mathrm{BERT}_{\mathrm{base}}$ models on the SQuADv1.1 task when ZipLM is used for structured pruning.
End-to-end latency indicated by the dashed line. + +# B Ablation Studies + +In Table 5, we present ablation results for ZipLM and CoFi, with and without their respective layerwise distillation techniques. ZipLM outperforms CoFi in all tasks when both methods use distillation, and in three out of four when distillation is not used. For example, ZipLM outperforms CoFi with a significant 3 point increase in F1 score on the SQuAD task in both setups. Furthermore, when comparing ZipLM results with and without layer-wise distillation, it can be observed that benefits are pronounced for low data tasks, where accuracy improvements reach up to 2 points. + +# C Additional GLUE results + +Due to space constraints, in Figure 3 we present results only on four GLUE tasks. Therefore, for completeness, in Figure 7 we present results on the remaining four GLUE tasks, namely: CoLA, MRPC, STS-B, and RTE. Results show the same trends as the other four GLUE tasks, with large improvements for ZipLM, especially at higher compression rates. + +Table 5: Comparison of ZipLM and CoFi dev-set results, with and without layer-wise distillation. + +
|  | SST-2 acc. | QNLI acc. | MNLI m-acc. | SQuAD F1 |
| --- | --- | --- | --- | --- |
| CoFi | 90.4 | 86.1 | 80.6 | 82.6 |
| $\mathrm{ZipBERT}_{\mathrm{base}}$ | 91.7 | 88.6 | 81.7 | 85.7 |
| CoFi w/o $L_{\mathrm{layer}}$ | 91.1 | 85.1 | 79.7 | 82.5 |
| $\mathrm{ZipBERT}_{\mathrm{base}}$ w/o $L_{\mathrm{token}}$ | 89.2 | 86.5 | 81.2 | 85.7 |
+ +![](images/7f3b07d63cd6a85a71c805f090b60efd110a876fdd6662e1f0bbf246c0645582.jpg) +CoFi + +![](images/5b6ee59b0857f5420b74ebcacaa7da5412e57fd223a12d0d16337e6cff275db6.jpg) +Tiny + +![](images/ff6b3b07ca23438127da55ecea5812ee943be5537fb7132e207d04aec59392e5.jpg) +MiniLM + +![](images/e19df038adf3f422d8ae30ba1ed488e2b65713eb4d7207dff5f2e84b965b995a.jpg) +Low-Rank BERT + +![](images/e3ece136fffcc6fcdcda31c12efd08b569005060af362144f59b187e7dc862a9.jpg) +BERT-PKD + +![](images/609d3a4c8d06c6b7a5500203ae1a2ef371c4ab98df54a4e063c73900cd7c5914.jpg) + +![](images/5386dc7dd75cf17eb9a6bbea4a411010f6bb2095a28161a42ce7bf75fc69b17e.jpg) +Figure 7: Structured compression of $\mathrm{BERT}_{\mathrm{base}}$ on CoLA, MRPC, STS-B, and RTE tasks. Dashed horizontal lines represent full accuracy recovery of the uncompressed model. + +![](images/87cecef32cb2c93c064c4a57a717a522021679f39e8c96572967da86bea63a3e.jpg) +MRPC + +![](images/f296072117e4197673fff32324a8df82e49ec7fd2bad5ccf5b67c71ee3250748.jpg) + +![](images/0d6d4542b0b03a6b4035cfd615af9fc9003fa899ec00e6cf36982932bc084202.jpg) + +# D Additional Validation + +Evaluating and comparing compressed models on the development set (dev-set) is standard practice, as it enables comparisons with off-the-shelf results from the literature. However, an implicit assumption behind such comparisons is that all methods tune their hyper-parameters only on a subset of the dev-set before evaluating and reporting results on all samples, which is not always the case. Moreover, specifically-tuned hyper-parameters can lead to large performance differences, especially when compressing LLMs [23]. To ensure that there is no such "overfitting" on the dev-set, in Table 6 we compare ZipLM against the prior state-of-the-art CoFi approach on unseen test-set, obtained by submitting predictions to the official GLUE evaluation server. The results show consistent improvements over CoFi, on both dev- and test-sets. 
+ +# E Latency table used for ZipLM pruning + +As described in Section 3.2, we record the time to run an attention block, including all overheads, with $0, \ldots, N_{\mathrm{heads}} - 1$ heads pruned (pruning everything with runtime 0 is also considered) and similarly for the fully-connected block with the intermediate dimension shrunk by a factor of $0.9^i$ , for $i = 0, \ldots, 42$ ; in relative steps of $10\%$ up until $\approx 99\%$ sparsity, following [5]. In Table 7 we present an example of such a latency table used in ZipLM pruning approach. + +# F Speedup Evaluations + +As shown in Figure 1, ZipLM is based on measuring runtimes of higher-level modules, such as attention heads and fully connected matrices, rather than low-level operators. This makes our + +Table 6: Dev- and test-set comparison of ZipBERTbase and CoFi models with comparable speedups. + +
|  | CoFi (dev-set) | $\mathrm{ZipBERT}_{\mathrm{base}}$ (dev-set) | CoFi (test-set) | $\mathrm{ZipBERT}_{\mathrm{base}}$ (test-set) |
| --- | --- | --- | --- | --- |
| QNLI, acc. | 86.1 | 88.6 | 85.8 | 88.4 |
| SST-2, acc. | 90.4 | 91.7 | 88.2 | 91.8 |
| MNLI, m-acc. | 80.6 | 81.7 | 80.7 | 81.9 |
| MNLI, mm-acc. | 80.7 | 82.0 | 79.9 | 80.6 |
| SQuAD, F1 | 82.6 | 85.7 | N/A | N/A |
+ +Table 7: An example of the latency table used by the ZipLM pruning approach. + +
| Intermediate size | Latency (ms) | Number of heads | Latency (ms) |
| --- | --- | --- | --- |
| 3072 | 11.9 | 12 | 7.9 |
| 1814 | 7.4 | 10 | 6.7 |
| 1322 | 5.8 | 8 | 5.8 |
| 302 | 1.6 | 6 | 4.4 |
| 130 | 1.0 | 4 | 3.2 |
| 76 | 0.9 | 2 | 1.9 |
| 33 | 0.7 | 0 | 0 |
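The enumeration of candidate sub-structures behind this latency table (Appendix E) can be sketched in a few lines; rounding the fractional intermediate sizes to the nearest integer is our assumption here, and the actual latencies must, of course, be measured on the target device:

```python
# Candidate sub-structures for which ZipLM records on-device latencies:
# the intermediate dimension shrinks in relative steps of 10% (factor 0.9**i,
# i = 0..42), and attention blocks are timed with every possible head count.
# Pruning everything (size 0, runtime 0) is also included.

def candidate_intermediate_sizes(full_size=3072, steps=43):
    """full_size * 0.9**i for i = 0..steps-1, plus 0 (everything pruned)."""
    return [round(full_size * 0.9 ** i) for i in range(steps)] + [0]

def candidate_head_counts(n_heads=12):
    """All remaining head counts, from unpruned down to fully pruned."""
    return list(range(n_heads, -1, -1))

sizes = candidate_intermediate_sizes()
print(sizes[:3], "...", sizes[-2:])  # [3072, 2765, 2488] ... [37, 0]
print(candidate_head_counts())       # [12, 11, ..., 0]
```

Several of the generated sizes (e.g., 1814 and 1322) match the rows of Table 7, since $3072 \cdot 0.9^5 \approx 1814$ and $3072 \cdot 0.9^8 \approx 1322$.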
+ +Table 8: Comparison of target (desired) inference speedups with achieved (on-device measured) speedups obtained with our ZipLM pruning approach. + +
| Target speedup | $\mathrm{BERT}_{\mathrm{base}}$ achieved speedup | Deviation | $\mathrm{BERT}_{\mathrm{large}}$ achieved speedup | Deviation |
| --- | --- | --- | --- | --- |
| 2 | 1.98 | -1.00% | 2.01 | +0.50% |
| 4 | 4.05 | +1.25% | 4.05 | +1.25% |
| 6 | 6.16 | +2.67% | 6.09 | +1.50% |
| 8 | 8.25 | +3.12% | 8.27 | +3.37% |
| 10 | 10.36 | +3.60% | 10.33 | +3.30% |
| 12 | 12.31 | +2.58% | 12.46 | +3.83% |
| 14 | 14.33 | +2.35% | 14.74 | +5.28% |

Both models are evaluated on SQuADv1.1.
+ +approach independent of underlying optimizations in different inference engines and frameworks, which usually perform further optimizations such as operator-fusion. Our runtime lookup table contains information about the runtime of a Transformer layer with different numbers of attention heads, and various dimensions of the fully connected matrices. This implies that we measure runtimes of a layer with 12 heads, 11 heads, 10 heads, and so on, as well as the runtimes of fully connected matrices with hidden sizes ranging from 3072 to 0. We utilize this information to guide pruning decisions. + +To fully validate the ability of ZipLM to compress the model while satisfying desired speedup constraints via the described approach, we provide the timing results in Table 8, comparing the desired (target) speedup and the achieved (measured) speedup for different models. + +As can be seen from the Table 8, the deviation between the desired (target) and the achieved (measured) speedup is at most $5.28\%$ . This confirms that our approach indeed provides reliable runtime information to guide the pruning decisions. + +# G Structure of Pruned Models + +Through a comprehensive examination of ZipLM pruned BERT models across all datasets considered in Section 4, we aim to identify trends in the pruning of key components of the Transformer layer, namely attention heads and intermediate size, needed to achieve a specific speedup target. As illustrated in Figure 8, we observe that the intermediate size is pruned at a higher rate relative to attention heads, which aligns with the fact that the intermediate size dictates the dimensions of the two large linear layers in the feed-forward part of the Transformer block. For instance, to attain a 2x speedup, roughly $60\%$ of the intermediate size and $40\%$ of the attention heads need to be removed. Additionally, in Figure 9, we visualize the entire encoder size needed to reach a specific speedup target. 
Interestingly, we find that 15x faster models retain on average only $2\%$ of intermediate size and $6\%$ of attention heads which amounts to only 2.9M parameters overall, while at the same time recovering more than $95\%$ of the uncompressed model's accuracy (see Figure 3). + +Additionally, in Figures 10, 11, 12, 13 we visualize the number of remaining heads and intermediate size across all Transformer layers and various speedup targets on a subset of GLUE datasets. + +![](images/e84b3ac7eb75798458f7b81d102443721ea90cf5de783ffd69049d20eb6bc31d.jpg) +Figure 8: Percentage of pruned attention heads and intermediate size to reach a specific speedup target with ZipLM. + +![](images/a7645b38e01bffd8c68d8bdfe30bbbd7a87d0505faac658162d8f29e7265fe7f.jpg) +Figure 9: Encoder size vs. speedup factor of ZipLM pruned BERTbase models, averaged over all considered datasets in Section 4. + +![](images/c96019584e66a2f63dd6459c37a753e3123d0caf9533d15e2e5a8baa06b99ca4.jpg) +Figure 10: Remaining number of attention heads and intermediate size across all layers of the ZipLM compressed $\mathrm{BERT}_{\mathrm{base}}$ model at various speedups and MNLI dataset. + +![](images/6050c6ec17512037b79a9aaca092e9f23377a369261b9779ac68aa7ea7550c63.jpg) + +# H Experiments - Additional Results + +In Table 9 we report accuracy and model size of ZipLM pruned models visualized in Section 4, in Figures 2 and 3. + +![](images/05dda968b6a545942800bae84d5858d34c581aa53b93daa17c06066e11d40b70.jpg) +Figure 11: Remaining number of attention heads and intermediate size across all layers of the ZipLM compressed $\mathrm{BERT}_{\mathrm{base}}$ model at various speedups and QNLI dataset. 
+ +![](images/480d914d74976d828584e60a213d037a0e79503d778e0128439a3ffb98c4f6af.jpg) + +![](images/65a184773f12cb03f1d4b62cfba66e9eaa79b18d1fb95a4633c4e609615005a3.jpg) +Figure 12: Remaining number of attention heads and intermediate size across all layers of the ZipLM compressed $\mathrm{BERT}_{\mathrm{base}}$ model at various speedups and QQP dataset. + +![](images/76ffd7fb191ecb0f806048b13e188aca42b1a4cb08f8bc5f22109ab0453bd7e0.jpg) + +![](images/643d72becf825c262a62c5fa97d710ca4b8259b4ec5862b15f7837a1cbcfbd97.jpg) +Figure 13: Remaining number of attention heads and intermediate size across all layers of the ZipLM compressed $\mathrm{BERT}_{\mathrm{base}}$ model at various speedups and SST-2 dataset. + +![](images/2d15c795643f89e884182df13bd9a0816e7e7c864fd486b33aedf8d233f4bd19.jpg) + +# I Hyper-parameters for Reproducibility + +To facilitate reproducibility, we conduct experiments in the open-source Transformers library [58], and use publicly available datasets [28]. We plan to open-source our entire framework, which supports one-shot and gradual structured pruning via SparseML [25], making it very easy to experiment with other models and datasets. In addition to our code, we plan to open-source all of our compressed models via the popular HuggingFace Hub. In Table 10 we report hyper-parameters used to produce our ZipLM pruned models in Section 4. Because of the excessive memory overhead, we do not make use of any kind of knowledge distillation when pruning the GPT2 model. Following insights from DistilGPT2, we hypothesize that adding distillation could further improve our results. We follow [22] and disable dropout regularization while pre-training ZipGPT2 models on the OpenWebTextCorpus dataset. + +# J Broader Impact and Limitations + +Our results contribute to the line of work on efficient language models. Thus, our work should help reduce the energy and monetary cost of inference over such models, and allow them to be used without access to powerful hardware.
While this is a mainly positive outcome, it also reduces the cost of employing these models for detrimental purposes, such as spam generation. Thus, this significant cost reduction for inference should also be seen as further motivation for methods to ensure safe usage of these models, such as watermarking or alignment. + +Table 9: Accuracy and model size for ZipLM pruned models in Section 4. + +
All columns refer to $\mathrm{BERT}_{\mathrm{base}}$, except the last two, which refer to $\mathrm{BERT}_{\mathrm{large}}$.

| Speedup | QNLI | Enc. size (M) | MNLI | Enc. size (M) | SST-2 | Enc. size (M) | QQP | Enc. size (M) | SQuADv1 F1 | Enc. size (M) | SQuADv1 F1 ($\mathrm{BERT}_{\mathrm{large}}$) | Enc. size (M) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 2x | 91.4 | 38.0 | 84.8 | 38.5 | 93.4 | 38.7 | 91.3 | 37.8 | 89.1 | 37.3 | 91.6 | 141.1 |
| 3x | 91.1 | 23.8 | 84.8 | 23.5 | 93.4 | 24.1 | 91.3 | 23.8 | 88.6 | 23.4 | 91.4 | 88.3 |
| 4x | 90.9 | 16.9 | 84.0 | 17.1 | 93.0 | 17.2 | 91.3 | 16.8 | 88.0 | 16.8 | 91.1 | 63.1 |
| 5x | 90.8 | 12.5 | 84.0 | 13.5 | 93.0 | 13.5 | 91.1 | 13.0 | 87.5 | 13.0 | 90.8 | 48.5 |
| 6x | 90.4 | 9.5 | 83.5 | 10.5 | 93.0 | 11.0 | 91.1 | 10.2 | 86.7 | 10.4 | 90.2 | 39.1 |
| 7x | 89.8 | 8.0 | 83.2 | 8.8 | 93.0 | 9.0 | 90.9 | 8.1 | 86.1 | 8.7 | 89.9 | 32.7 |
| 8x | 89.2 | 6.4 | 83.1 | 7.5 | 93.0 | 7.6 | 90.9 | 6.8 | 85.7 | 7.5 | 89.7 | 27.5 |
| 9x | 89.1 | 5.7 | 82.8 | 6.3 | 93.0 | 6.7 | 90.8 | 5.8 | 85.3 | 6.2 | 89.3 | 23.8 |
| 10x | 88.6 | 4.9 | 82.7 | 5.4 | 93.0 | 5.7 | 90.8 | 4.9 | 84.2 | 5.3 | 89.1 | 20.9 |
| 11x | 88.6 | 4.0 | 82.5 | 4.7 | 92.7 | 4.9 | 90.7 | 4.3 | 83.8 | 4.7 | 88.8 | 18.4 |
| 12x | 87.8 | 3.6 | 81.7 | 4.1 | 91.7 | 4.2 | 90.6 | 4.1 | 83.2 | 4.0 | 88.4 | 16.4 |
| 13x | 87.6 | 3.2 | 81.3 | 3.5 | 91.7 | 3.8 | 90.6 | 3.7 | 82.5 | 3.4 | 87.9 | 14.9 |
| 14x | 87.4 | 2.8 | 81.2 | 3.3 | 91.7 | 3.6 | 90.3 | 3.3 | 81.7 | 3.2 | 87.7 | 13.7 |
| 15x | 87.2 | 2.6 | 80.8 | 2.9 | 90.7 | 3.2 | 90.3 | 2.9 | 81.4 | 2.9 | 87.6 | 12.5 |
+ +Table 10: Hyper-parameters used for gradual ZipLM runs in Section 4. + +
|  | $\mathrm{BERT}_{\mathrm{base}}$ | $\mathrm{BERT}_{\mathrm{large}}$ | GPT2 |
| --- | --- | --- | --- |
| batch-size | 16 (SQuADv1), 32 (GLUE) | 16 (SQuADv1), 32 (GLUE) | 128 |
| max-seq-length | 384 (SQuADv1), 128 (GLUE) | 384 (SQuADv1), 128 (GLUE) | 1024 |
| finetune before pruning | 3 epochs | 3 epochs | 50k steps |
| finetune in-between pruning steps | 8 epochs | 10 epochs | 2 epochs |
| LR schedule in-between pruning steps | linear decay | linear decay | linear decay |
| initial LR | 8e-5 | 5e-5 | 1e-3 |
| #calibration samples | 2048 | 2048 | 512 |
| speedup-targets | {2, 3, 4, 5, ..., 15}x | {2, 3, 4, 5, ..., 15}x | {1.5, 2, 2.5, 3}x |
| knowledge distillation λ1 | 0 | 0 | 1.0 |
| knowledge distillation λ2 | 1.0 (SQuADv1), 0.5 (GLUE) | 1.0 (SQuADv1), 0.5 (GLUE) | 0 |
| knowledge distillation λ3 | 0.0 (SQuADv1), 0.5 (GLUE) | 0.0 (SQuADv1), 0.5 (GLUE) | 0 |
| weight-decay | 0.03 | 0.05 | 0 |
+ +As any academic study, our work is not without its limitations. All of our benchmarks are focused on English-language datasets and therefore our results do not provide insights into compression effects for low-data languages. Unfortunately, this limitation is inherent to all of the existing works on compression due to the lack of standardized benchmarks. Given that our structured pruning approach relies on a small sample of calibration data to perform pruning decisions, we hypothesize that our approach should be able to provide satisfying results in the low-data setup as well. At the moment we do not have data to support these claims, but we see it as an opportunity for future work. \ No newline at end of file diff --git a/ziplminferenceawarestructuredpruningoflanguagemodels/images.zip b/ziplminferenceawarestructuredpruningoflanguagemodels/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..c48f61e6b8a9ca9de79105973cc5c62c33167709 --- /dev/null +++ b/ziplminferenceawarestructuredpruningoflanguagemodels/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:93e94f7145c36d145549d797d7078ca0d6680432b767a2ae5ce9766444984d51 +size 954499 diff --git a/ziplminferenceawarestructuredpruningoflanguagemodels/layout.json b/ziplminferenceawarestructuredpruningoflanguagemodels/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..e2e64a5bc72b398ebe20181f7527cbccd3f79d94 --- /dev/null +++ b/ziplminferenceawarestructuredpruningoflanguagemodels/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:fab775c86259f928a251c04b6cf8528807043e56b9c207c62fee06a86bab3ef0 +size 569939 diff --git a/zoomtracktargetawarenonuniformresizingforefficientvisualtracking/54544d0f-13ec-4562-b963-c4fade966b1a_content_list.json b/zoomtracktargetawarenonuniformresizingforefficientvisualtracking/54544d0f-13ec-4562-b963-c4fade966b1a_content_list.json new file mode 100644 index 
0000000000000000000000000000000000000000..f0a766ff21ab1634b23629e427b8e55e1731da3c --- /dev/null +++ b/zoomtracktargetawarenonuniformresizingforefficientvisualtracking/54544d0f-13ec-4562-b963-c4fade966b1a_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d2455f11976891ce136806d54eca9bcbcd2e5ab7e9d7988a44f6b9ac53bfc7ae +size 122004 diff --git a/zoomtracktargetawarenonuniformresizingforefficientvisualtracking/54544d0f-13ec-4562-b963-c4fade966b1a_model.json b/zoomtracktargetawarenonuniformresizingforefficientvisualtracking/54544d0f-13ec-4562-b963-c4fade966b1a_model.json new file mode 100644 index 0000000000000000000000000000000000000000..9b64be5f8aee9bb0bcaf1f42357a5106b051d323 --- /dev/null +++ b/zoomtracktargetawarenonuniformresizingforefficientvisualtracking/54544d0f-13ec-4562-b963-c4fade966b1a_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:2d1b3d7669c8cb9c5ecd38898a269a1edcd0ebe20ff60d9f16d8645ed37d0d67 +size 139500 diff --git a/zoomtracktargetawarenonuniformresizingforefficientvisualtracking/54544d0f-13ec-4562-b963-c4fade966b1a_origin.pdf b/zoomtracktargetawarenonuniformresizingforefficientvisualtracking/54544d0f-13ec-4562-b963-c4fade966b1a_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..f75e73bca601d3775618f4fb1f054bdb9e9a619f --- /dev/null +++ b/zoomtracktargetawarenonuniformresizingforefficientvisualtracking/54544d0f-13ec-4562-b963-c4fade966b1a_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:bee52ed270de020fd647f541ff21d812641b1532a160a2b13f414e749d6dec5e +size 2212879 diff --git a/zoomtracktargetawarenonuniformresizingforefficientvisualtracking/full.md b/zoomtracktargetawarenonuniformresizingforefficientvisualtracking/full.md new file mode 100644 index 0000000000000000000000000000000000000000..4dc2027136d2b97a1b0ac130e4a519ac5fb75dc9 --- /dev/null +++ 
b/zoomtracktargetawarenonuniformresizingforefficientvisualtracking/full.md @@ -0,0 +1,470 @@ +# ZoomTrack: Target-aware Non-uniform Resizing for Efficient Visual Tracking + +Yutong Kou $^{1,2}$ , Jin Gao $^{1,2*}$ , Bing Li $^{1,5}$ , Gang Wang $^{4*}$ , Weiming Hu $^{1,2,3}$ , Yizheng Wang $^{4}$ , Liang Li $^{4*}$ + +$^{1}$ State Key Laboratory of Multimodal Artificial Intelligence Systems (MAIS), CASIA $^{2}$ School of Artificial Intelligence, University of Chinese Academy of Sciences $^{3}$ School of Information Science and Technology, ShanghaiTech University $^{4}$ Beijing Institute of Basic Medical Sciences $^{5}$ People AI, Inc kouyutong2021@ia.ac.cn, {jin.gao,bli,wmhu}@nlpr.ia.ac.cn, liang.li.brain@aliyun.com, g_wang@foxmail.com, yzwang57@sina.com + +# Abstract + +Recently, the transformer has enabled the speed-oriented trackers to approach state-of-the-art (SOTA) performance with high-speed thanks to the smaller input size or the lighter feature extraction backbone, though they still substantially lag behind their corresponding performance-oriented versions. In this paper, we demonstrate that it is possible to narrow or even close this gap while achieving high tracking speed based on the smaller input size. To this end, we non-uniformly resize the cropped image to have a smaller input size while the resolution of the area where the target is more likely to appear is higher and vice versa. This enables us to solve the dilemma of attending to a larger visual field while retaining more raw information for the target despite a smaller input size. Our formulation for the nonuniform resizing can be efficiently solved through quadratic programming (QP) and naturally integrated into most of the crop-based local trackers. Comprehensive experiments on five challenging datasets based on two kinds of transformer trackers, i.e., OSTrack and TransT, demonstrate consistent improvements over them. 
In particular, applying our method to the speed-oriented version of OSTrack even outperforms its performance-oriented counterpart by $0.6\%$ AUC on TNL2K, while running $50\%$ faster and saving over $55\%$ MACs. Codes and models are available at https://github.com/Kou-99/ZoomTrack. + +# 1 Introduction + +In visual tracking, many efforts have been made to improve the discrimination and localization ability of the tracking models, including deeper networks [18, 34], transformer feature fusion [8, 31, 21, 19], joint feature extraction and interaction [33, 7, 9], and so on. However, most of the recent tracking algorithms, including the transformer-based [33, 7, 19, 9], still follow the paradigm proposed by Bertinetto et al. [3], in which a small exemplar image cropped from + +![](images/44686d191faa6991b30f93a079e64ae1abf03fe3555d8cbab9cd8af34fff1c42.jpg) +Comparison of trackers on TNL2K +Figure 1: Our method consistently improves the OSTrack and TransT baselines with negligible computation. + +the first frame is used to locate the target within a large search image cropped from one of the subsequent frames. The crop size, which determines the visual field size, is derived from a reference bounding box plus a margin for context. Both of the crops are uniformly resized to fixed sizes to facilitate the training and testing of the tracker. + +Due to the fact that the complexity of the transformers scales quadratically with input size, many transformer trackers [33, 7, 19, 9] propose both speed-oriented versions with the smaller input size or feature extraction backbone and performance-oriented versions with larger ones. Thanks to the enabled larger visual field size or stronger feature representation, the performance-oriented versions always outperform their speed-oriented counterparts in performance, though the tracking speed is severely reduced. 
For instance, an increased $1.5 \times$ input size for OSTrack [33] will cause doubled Multiply-Accumulate Operations (MACs) and nearly halved tracking speed. Thus, it is natural to pose the question: Is it possible to narrow or even close this performance gap while achieving high tracking speed based on a smaller input size? + +Inspired by the human vision system (HVS), we propose to non-uniformly resize the attended visual field to have a smaller input size for visual tracking in this paper. The human retina receives about $100\mathrm{MB}$ of visual input per second, while only 1 MB of visual data can be sent to the central brain [36]. This is achieved by making the best use of the limited resource of a finite number of cones in the HVS. More specifically, the density of cone photoreceptors drops exponentially with eccentricity (i.e., the deviation from the center of the retina), yielding a more accurate central vision for precise recognition and a less accurate peripheral vision for rough localization [36]. This enables us to solve the dilemma of attending to a larger visual field while retaining more raw information for the target area despite the smaller input size. + +In our formulation for the non-uniform resizing, the area where the target is more likely to appear is magnified to retain more raw information with high resolution and facilitate precise recognition, whereas the area where the target is less likely to appear is shrunk yet preserved to facilitate rough localization when encountering fast motion or tracking drift. The key to our method is to design an efficient and controllable resizing module based on a small controllable grid. On top of this grid, we define a zoom energy to explicitly control the scale of magnification for the important target area, a rigid energy to avoid extreme deformation, and a linear constraint to avoid cropping the source image during resizing. The important area is determined by the previous tracking result as a temporal prior.
This formulation can be solved efficiently through + +![](images/34008203d0ba8989caea5d30203497cdb0cdd6d400aeb5b51c211bc005039404.jpg) +Figure 2: We achieve non-uniform resizing by sampling according to a manipulable control grid $\mathfrak{g}$ , which is generated by solving for best $d_k^{col}$ and $d_l^{row}$ that minimize two energies that lead to controllable magnification and less deformation. + +quadratic programming (QP), whose solution is used to manipulate the controllable grid. Finally, the ideal non-uniform resizing is achieved by sampling according to the controllable grid (See Fig. 2). + +Our method can be easily integrated into plenty of tracking algorithms. We select the popular hybrid CNN-Transformer tracker TransT [8] and one-stream Transformer tracker OSTrack [33] to verify the efficiency and effectiveness of our method. Comprehensive experiments are conducted on five large-scale benchmarks, including GOT-10k [16], LaSOT [11], $\mathrm{LaSOT}_{ext}$ [12], TNL2k [30], TrackingNet [23]. We observe consistent improvements over the baselines with negligible computational overhead. In particular, we improve the speed-oriented version of OSTrack to achieve $73.5\%$ AO, $50.5\%$ AUC and $56.5\%$ AUC on GOT-10k, $\mathrm{LASOT}_{ext}$ and TNL2k respectively, which are on par with the performance-oriented counterpart while saving over $55\%$ MACs and running $50\%$ faster (see Fig. 1). In other words, the performance gap between speed-oriented and performance-oriented trackers can be narrowed or even closed through our method. 
+ +In summary, our contributions are as follows: (i) We propose ZoomTrack, an efficient non-uniform resizing method for visual tracking that bridges the gap between speed-oriented and performance-oriented trackers with negligible computation; (ii) We formulate the non-uniform resizing as an explicitly controlled magnification of important areas with a restriction on extreme deformation, enabling HVS-inspired data processing with limited resources; (iii) Extensive experiments based on two baseline trackers on multiple benchmarks show that our method achieves consistent performance gains across different network architectures. + +# 2 Related Work + +Efficient Visual Tracking. Many real-world applications require tracking algorithms to have high speed. Consequently, many efforts have been made to improve the efficiency of visual tracking. Yan et al. [32] use NAS to search for a lightweight network architecture for visual tracking. Borsuk et al. [6] design a novel way to incorporate temporal information with only a single learnable parameter, which achieves higher accuracy at a higher speed. Blatter et al. [5] design an efficient transformer layer and achieve real-time tracking on CPU. Shen et al. [26] propose a distilled tracking framework to learn small and fast trackers from heavy and slow trackers. Some other works [17, 10] explore the quickly advancing fields of vision transformer pre-training [15, 29] and architecture design [13] to improve both tracking accuracy and efficiency. Previous methods focus on designing either a better network [32, 5, 6, 17, 10] or a new training framework [26] to improve the efficiency of trackers, whereas our work focuses on non-uniform resizing of the input for the sake of efficiency. Our approach is more general and orthogonal to them. + +Non-uniform Resizing for Vision Tasks. Image resizing is a common image processing operation. Yet, the standard uniform resizing is not always satisfactory in many applications.
Avidan and Shamir [1] resize an image by repeatedly carving or inserting seams that are determined by the saliency of image regions. Panozzo et al. [24] use axis-aligned deformation generated from an automatic or human-appointed saliency map to achieve content-aware image resizing. Recasens et al. [25] adaptively resize the image based on a saliency map generated by CNN for image recognition. Zheng et al. [37] learn attention maps to guide the sampling of the image to highlight details for image recognition. Thavamani et al. [27] use the sampling function of [25] to re-sample the input image based on temporal and dataset-level prior for autonomous navigation. Bejnordi et al. [2] learn a localization network to magnify salient regions from the unmediated supervision of [37] for video object detection. Thavamani et al. [28] propose an efficient and differentiable warp inversion that allows for mapping labels to the warped image to enable the end-to-end training of dense prediction tasks like semantic segmentation. Existing methods either use time-consuming feature extractors to generate saliency [1, 24, 25, 37] or use heuristic sampling functions that cannot explicitly control the extent of the magnification, which may cause unsatisfactory magnification or extreme deformation [27, 2, 28]. In contrast, our approach directly generates the saliency map or important area using the temporal prior in tracking and proposes a novel sampling method that can achieve controllable magnification without causing extreme deformation. + +# 3 Background and Analysis + +# 3.1 Revisiting Resizing in Deep Tracking + +Before introducing our method, we first briefly revisit the resizing operation in the deep tracking pipeline. 
Given the first frame and its target location, visual trackers aim to locate the target in the subsequent frames using a tracking model $\phi_{\theta}(\mathbf{z},\mathbf{x})$, where $\mathbf{z}$ is the template patch from the first frame with the target in the center and $\mathbf{x}$ is the search patch from one of the subsequent frames. Both $\mathbf{z}$ and $\mathbf{x}$ are generated in a crop-then-resize manner. We next detail the cropping and resizing operations. + +Image crop generation. Both the template and search images are cropped based on a reference bounding box $b$, which can be the ground truth annotation for the template image or the previous tracking result for the search image. Denoting a reference box as $b = (b^{cx}, b^{cy}, b^{w}, b^{h})$, the crop size can be calculated as + +$$ +W = H = \sqrt{\left(b^{w} + (f - 1) \times c^{w}\right)\left(b^{h} + (f - 1) \times c^{h}\right)}, \tag{1} +$$ + +where $f$ is the context factor controlling the amount of context to be included in the visual field, and $c^w$ and $c^h$ denote the unit context amount, which depends only on $b^w$ and $b^h$. Typical choices of the unit context amount are $c^w = b^w$, $c^h = b^h$ [33, 31, 9], or $c^w = c^h = (b^w + b^h) / 2$ [8, 19]. A higher context factor means a larger visual field and vice versa. Usually, the search image has a larger + +![](images/7bf6891b5e50f252eab904cf77dd748c20bc69e9c86e7c9f03b0ba5153afef59.jpg) +Figure 3: Experiments on LaSOT [11] based on OSTrack [33] show that: (left) thanks to the larger attended visual field, simultaneously increasing the search size and the context factor leads to consistent performance improvement at the cost of a heavy computational burden; (right) increasing the attended visual field by enlarging the context factor while fixing the input size to limit the available resources leads to degraded performance due to the decreased target area resolution.
+ +![](images/7972916f8a741169facde3b96823897449a624508eb5bee5f84a813686ed1ef8.jpg) + +context factor for a larger visual field, and the template image has a small context factor providing minimal necessary context. The image crop $\mathcal{I}$ is obtained by cropping the area centered at $(b^{cx}, b^{cy})$ with width $W$ and height $H$ . Areas outside the image are padded with a constant value. Finally, box $b$ is transformed to a new reference bounding box $r = (r^{cx}, r^{cy}, r^{w}, r^{h})$ on the image crop $\mathcal{I}$ . + +Image crop resizing. To facilitate the batched training and later online tracking, the image crop has to be resized to a fixed size. Given an image crop $\mathcal{I}$ with width $W$ and height $H$ , a fix-sized image patch $\mathcal{I}'$ with width $w$ and height $h$ is obtained by resizing $\mathcal{I}$ . Specifically, $h \times w$ pixels in $\mathcal{I}'(x', y')$ are sampled from $\mathcal{I}(x, y)$ according to a mapping $\mathcal{T}: \mathbb{R}^2 \to \mathbb{R}^2$ , i.e., + +$$ +\mathcal {I} ^ {\prime} \left(x ^ {\prime}, y ^ {\prime}\right) = \mathcal {I} \left(\mathcal {T} \left(x ^ {\prime}, y ^ {\prime}\right)\right). \tag {2} +$$ + +Note that $\mathcal{T}$ does not necessarily map integer index to integer index. Thus, some pixels in $\mathcal{I}'$ may be sampled from $\mathcal{I}$ according to the non-integer indices. This can be realized by bilinear interpolating from nearby pixels in $\mathcal{I}$ . Current methods [33, 9, 19, 7] use uniform mapping $\mathcal{T}_{\text{uniform}}$ to resize $\mathcal{I}$ . $\mathcal{T}_{\text{uniform}}$ is defined as $\mathcal{T}_{\text{uniform}}(x', y') = \left(\frac{x'}{s^x}, \frac{y'}{s^y}\right)$ , where $s^x = w / W$ , $s^y = h / H$ are called resizing factors and indicate the scaling of the image. 
When applying uniform resizing, the resizing factor is the same everywhere on the image crop, which means the same amount of amplification or shrinkage is applied to all areas of the entire image. + +# 3.2 Solving the Dilemma Between Visual Field and Target Area Resolution + +Although the crop-then-resize paradigm is applied in most popular deep trackers, the fixed input search size and context factor vary significantly across different algorithms. Generally, performance can be improved by using a larger input size at the cost of a lower tracking speed [19, 33]. To understand the key factors behind this, we conduct a series of experiments on LaSOT [11] based on OSTrack [33]. First, we simultaneously increase the search size and the context factor, which means the target resolution is roughly fixed while the tracker attends to a larger visual field. As shown in the left of Fig. 3, the AUC on LaSOT increases along with the input search size thanks to the larger attended visual field. Note that increasing the search size $(256\rightarrow 384)$ leads to almost doubled computational overhead (21.5G MACs $\rightarrow$ 41.5G MACs). Next, we limit the available resources by fixing the input size while increasing the attended visual field size. As shown in the right of Fig. 3, the AUC on LaSOT first increases and then decreases as the average target resolution keeps decreasing. This result demonstrates that the benefit of an enlarged visual field is gradually wiped out by the decrease in target resolution when the computational cost is fixed. + +Inspired by the data processing with limited resources in the HVS, we believe the above dilemma of attending to a larger visual field while retaining more raw information for the target can be solved by replacing uniform resizing with non-uniform resizing. In visual tracking, the previous tracking result can serve as a strong temporal prior for the current target location.
Based on this prior, we can determine the area where the target is likely to appear and magnify it to retain more raw information at high resolution. Conversely, areas where the target is less likely to appear are shrunk to avoid decreasing the visual field when the input search size is fixed under limited resources. Notably, the magnification and shrinkage should not dramatically change the shape and aspect ratio of regions on the image, so as to facilitate robust learning of the appearance model. Based on the above analysis, we propose some guidelines for our non-uniform resizing, i.e., G1: Magnify the area where the target is most likely to appear; G2: Avoid extreme deformation; G3: Avoid cropping the original image $\mathcal{I}$ so that the original visual field is also retained. We detail how to incorporate these guidelines into our non-uniform resizing in the following section. + +# 4 Methods + +As discussed in Sec. 3.1, the resizing process from the source image $\mathcal{I}(x,y) \in \mathbb{R}^{H \times W \times 3}$ to the target image $\mathcal{I}'(x',y') \in \mathbb{R}^{h \times w \times 3}$ is a sampling operation controlled by a mapping $\mathcal{T}: (x',y') \to (x,y)$. Our aim is to find the mapping $\mathcal{T}$ that best follows the guidelines (G1~G3), given a temporal prior box $r = (r^{cx}, r^{cy}, r^{w}, r^{h})$ that indicates the most likely location of the target. We first restrict the domain of the mapping to a few points and acquire a grid representation of the mapping $\mathcal{T}$. Then, we formulate the guidelines as a QP problem based on the grid point intervals. By solving the QP problem via a standard solver [20], we can efficiently manipulate the controllable grid point intervals and achieve the desired non-uniform resizing by sampling according to the final grid representation. + +# 4.1 Grid Representation for Non-uniform Resizing + +Recall the sampling operation in Eq.
(2), pixel $(x', y')$ on the target image $\mathcal{I}' \in \mathbb{R}^{h \times w \times 3}$ is sampled from location $(x, y)$ on the source image $\mathcal{I} \in \mathbb{R}^{H \times W \times 3}$. Thus, at least $h \times w$ pairs of $(x', y') \to (x, y)$ are needed for resizing. We use exactly $h \times w$ pairs to represent the mapping $\mathcal{T}$, which can be defined as a grid $\mathcal{G} \in \mathbb{R}^{h \times w \times 2}$, where $\mathcal{G}[y'][x'] = (x, y)$, $x' = 1, \dots, w$, $y' = 1, \dots, h$, is the location on $\mathcal{I}$ from which the target image pixel $\mathcal{I}'(x', y')$ is sampled. + +Controllable grid. The grid $\mathcal{G}$ is an extremely dense grid with $h\times w$ grid points (equal to the resolution of the target image $\mathcal{I}'$), which makes it computationally inefficient to generate directly. To solve this issue, we define a small controllable grid $\mathfrak{g}\in \mathbb{R}^{(m + 1)\times (n + 1)\times 2}$ with $m + 1$ rows and $n + 1$ columns ($m < h$, $n < w$) to estimate $\mathcal{G}$, where + +$$ +\mathfrak{g}[j][i] = \mathcal{G}\left[\frac{h}{m} j\right]\left[\frac{w}{n} i\right], \quad i = 0, \dots, n, \; j = 0, \dots, m \tag{3} +$$ + +Intuitively, $\mathfrak{g}[j][i]$ is the sampling location on the source image for the target image pixel $\left(\frac{w}{n} i, \frac{h}{m} j\right)$ in $\mathcal{I}'$. Once $\mathfrak{g}$ is determined, $\mathcal{G}$ can be easily obtained through bilinear interpolation on $\mathfrak{g}$. + +Axis-alignment constraint. To avoid extreme deformation (G2) and reduce computational overhead, we add an axis-aligned constraint [27] to the grid $\mathfrak{g}$. Specifically, grid points in the same row $j$ have the same y-axis coordinate $y_{j}$, and grid points in the same column $i$ have the same x-axis coordinate $x_{i}$. That is to say, $\mathfrak{g}[j][i] = (x_i,y_j)$, $i = 0,\dots,n$, $j = 0,\dots,m$.
The reason why we use such an axis-aligned constraint is that we need to keep the axis-alignment of the bounding box (i.e., the four sides of the bounding box are parallel to the image boundary) after resizing, which is proven to be beneficial to the learning of object localization [27]. + +Grid representation. The source image is split into small rectangular patches $p(k,l)$, $k = 1, \ldots, n$, $l = 1, \ldots, m$ by the grid points. The top-left corner of $p(k,l)$ is $\mathfrak{g}[l - 1][k - 1]$ and its bottom-right corner is $\mathfrak{g}[l][k]$. The width and height of the patch $p(k,l)$ are denoted as the horizontal grid point interval $d_k^{col}$ and the vertical grid point interval $d_l^{row}$, which can be calculated as $d_k^{col} = x_k - x_{k-1}$, $d_l^{row} = y_l - y_{l-1}$. The axis-aligned controllable grid $\mathfrak{g}$ can be generated from the grid point intervals $d_l^{row}$, $d_k^{col}$ based on + +$$ +\mathfrak{g}[j][i] = \left(\sum_{k = 0}^{i} d_{k}^{col}, \sum_{l = 0}^{j} d_{l}^{row}\right), \tag{4} +$$ + +where $d_0^{row} = d_0^{col} = 0$ for the sake of simplicity. + +Non-uniform resizing. According to Eq. (3), the resizing process can be visualized using two grids (see Fig. 2): a manipulatable grid $\mathfrak{g}$ on the source image and a fixed grid $\mathfrak{g}_t[j][i] = \left(\frac{w}{n} i, \frac{h}{m} j\right)$ on the target image. $\mathfrak{g}$ and $\mathfrak{g}_t$ split the source and target images into rectangular patches $p(k,l)$ and $p_t(k,l)$ respectively. The target image pixel at $\mathfrak{g}_t[j][i]$ is sampled from $\mathfrak{g}[j][i]$ on the source image. This process can be intuitively understood as resizing $p(k, l)$ to $p_t(k, l)$. When the grid point intervals of $\mathfrak{g}$ are manipulated, the grid point intervals of $\mathfrak{g}_t$ remain unchanged.
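Because the grid is axis-aligned, the cumulative construction of Eq. (4) and the bilinear upsampling of $\mathfrak{g}$ to the dense grid $\mathcal{G}$ both reduce to independent 1-D interpolations per axis. A minimal NumPy sketch (function name is ours):

```python
import numpy as np

def grids_from_intervals(d_row, d_col, h, w):
    """Build the controllable grid coordinates (Eq. 4) from the grid point
    intervals, then upsample to the dense sampling grid G by interpolating
    each axis independently (valid because the grid is axis-aligned)."""
    ys = np.concatenate(([0.0], np.cumsum(d_row)))   # y_0 .. y_m on the source
    xs = np.concatenate(([0.0], np.cumsum(d_col)))   # x_0 .. x_n on the source
    m, n = len(d_row), len(d_col)
    # fixed target-grid coordinates g_t: uniformly spaced on the target image
    ty = np.linspace(0, h, m + 1)
    tx = np.linspace(0, w, n + 1)
    G_y = np.interp(np.arange(h), ty, ys)            # dense y sampling coords
    G_x = np.interp(np.arange(w), tx, xs)            # dense x sampling coords
    return G_x, G_y    # sample I'(x', y') from I(G_x[x'], G_y[y'])
```

With uniform intervals the dense coordinates reduce to uniform resizing; shrinking an interval compresses the corresponding source span into the same fixed target span, i.e., magnifies it.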
Thus, by reducing/increasing the grid intervals of $\mathfrak{g}$, the corresponding region on the target image is amplified/shrunk. In this way, a non-uniform resizing operation that scales different image areas with different ratios can be obtained by dynamically manipulating the grid point intervals. + +# 4.2 QP Formulation Based on Grid Representation + +Given a reference bounding box $r = (r^{cx}, r^{cy}, r^w, r^h)$, we aim to dynamically manipulate the grid point intervals to best follow the guidelines (G1~G3) proposed in Sec. 3.2. + +Importance score for the target area. According to G1, areas where the target is most likely to appear should be magnified. Following this guideline, we first define the importance score for the target area on the source image. An area has high importance if the target is more likely to appear in it. As $r$ is a strong temporal prior for the current target location, we use a Gaussian function $G(x,y)$ centered at $(r^{cx},r^{cy})$ as the importance score function, i.e., + +$$ +G(x, y) = \exp\left(-\frac{1}{2}\left(\frac{(x - \mu_x)^2}{\sigma_x^2} + \frac{(y - \mu_y)^2}{\sigma_y^2}\right)\right), \tag{5} +$$ + +where $\mu_x = r^{cx}$, $\mu_y = r^{cy}$, $\sigma_x = \sqrt{\beta \times r^w}$, $\sigma_y = \sqrt{\beta \times r^h}$, and $\beta$ is a hyper-parameter controlling the bandwidth of the Gaussian function. Since we denote the rectangular area enclosed by the grid point intervals $d_l^{row}$ and $d_k^{col}$ as patch $p(k, l)$, its importance score $S_{k,l}$ can be determined by the value of $G(x, y)$ at the center of patch $p(k, l)$, i.e., + +$$ +S_{k, l} = G\left(\left(k + \frac{1}{2}\right) \times \frac{W}{n}, \left(l + \frac{1}{2}\right) \times \frac{H}{m}\right) + \epsilon, \tag{6} +$$ + +where $\epsilon$ is a small constant that prevents extreme deformation and ensures stable computation.
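A minimal NumPy sketch of the importance map of Eqs. (5)-(6), assuming zero-based patch indices so the patch centres are $\left(\left(k+\frac{1}{2}\right)\frac{W}{n}, \left(l+\frac{1}{2}\right)\frac{H}{m}\right)$ for $k, l$ starting at 0 (function name and the $\epsilon$ default are ours):

```python
import numpy as np

def importance_scores(r, W, H, n, m, beta=64.0, eps=1e-2):
    """S_{k,l} from Eqs. (5)-(6): a Gaussian centred on the prior box r,
    evaluated at the patch centres of a uniformly initialised grid."""
    cx, cy, rw, rh = r
    var_x, var_y = beta * rw, beta * rh        # sigma^2 = beta * box side
    # patch centres of the uniform grid: ((k + 1/2) W/n, (l + 1/2) H/m)
    xc = (np.arange(n) + 0.5) * W / n
    yc = (np.arange(m) + 0.5) * H / m
    X, Y = np.meshgrid(xc, yc)                 # shape (m, n): S[l, k]
    G = np.exp(-0.5 * ((X - cx) ** 2 / var_x + (Y - cy) ** 2 / var_y))
    return G + eps
```

Evaluating at uniform patch centres (rather than the current, deformed ones) is what makes $S_{k,l}$ a fixed coefficient in the QP below.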
Here we use the patch centers of a uniformly initialized grid ($d_k^{col} = \frac{W}{n}$ and $d_l^{row} = \frac{H}{m}$). By doing so, $S_{k,l}$ is independent of $d_k^{col}$ and $d_l^{row}$, which is crucial for the QP formulation in the following. + +Zoom energy. Following G1, we design a quadratic energy $E_{zoom}$ to magnify the area where the target is most likely to appear. To achieve controllable magnification, we define a zoom factor $\gamma$ controlling the amount of magnification. If we want to magnify some area by $\gamma$, the distances between the sampling grid points should shrink by $\frac{1}{\gamma}$. $E_{zoom}$ therefore forces $d_k^{col}$ and $d_l^{row}$ to be close to the grid point intervals under uniform magnification by $\gamma$, i.e., $\frac{1}{\gamma}\frac{H}{m}$ for $d_l^{row}$ and $\frac{1}{\gamma}\frac{W}{n}$ for $d_k^{col}$. In detail, the zoom energy is defined as + +$$ +E_{zoom} = \sum_{l = 1}^{m} \sum_{k = 1}^{n} S_{k, l}^{2}\left(\left(d_{l}^{row} - \frac{1}{\gamma}\frac{H}{m}\right)^{2} + \left(d_{k}^{col} - \frac{1}{\gamma}\frac{W}{n}\right)^{2}\right). \tag{7} +$$ + +Rigid energy. To avoid extreme deformation (G2), we define a quadratic energy $E_{rigid}$ to restrict the aspect-ratio change of patch $p(k,l)$, which has width $d_k^{col}$ and height $d_l^{row}$. Under uniform resizing, $p(k,l)$ would have width $\frac{W}{n}$ and height $\frac{H}{m}$. Thus, the patch $p(k,l)$ is horizontally stretched by $\frac{n}{W} d_k^{col}$ and vertically stretched by $\frac{m}{H} d_l^{row}$. To keep the aspect ratio mostly unchanged for the important area, the horizontal stretch should be similar to the vertical stretch. Therefore, the rigid energy can be defined as + +$$ +E_{rigid} = \sum_{l = 1}^{m} \sum_{k = 1}^{n} S_{k, l}^{2}\left(\frac{m}{H} d_{l}^{row} - \frac{n}{W} d_{k}^{col}\right)^{2}. \tag{8} +$$ + +Linear constraint to avoid cropping the source image.
According to G3, the resizing should not crop the source image, so that the original visual field is retained. That is to say, the grid $\mathfrak{g}$ should completely cover the entire source image. This requirement can be interpreted as a simple linear constraint that forces the sum of the vertical grid point intervals $d_l^{row}$ to be $H$ and the sum of the horizontal grid point intervals $d_k^{col}$ to be $W$: $\sum_{l=1}^{m} d_l^{row} = H$, $\sum_{k=1}^{n} d_k^{col} = W$. + +QP formulation. Combining the zoom energy, the rigid energy and the linear constraint, we can formulate the grid manipulation as the following minimization task + +$$ +\underset{d_{l}^{row},\, d_{k}^{col}}{\text{minimize}} \quad E = E_{zoom} + \lambda E_{rigid} +$$ + +$$ +\text{subject to} \quad \sum_{l = 1}^{m} d_{l}^{row} = H, \quad \sum_{k = 1}^{n} d_{k}^{col} = W \tag{9} +$$ + +where $\lambda$ is a hyper-parameter to balance the two energies. This convex objective can be efficiently solved by a standard QP solver [20] (see Appendix A for more detailed derivations). + +# 4.3 Integration with Common Trackers + +Reference box generation. Our non-uniform resizing method needs a temporal prior reference box $r = (r^{cx}, r^{cy}, r^w, r^h)$ on the cropped source image $\mathcal{I}$. During testing, $r$ is directly generated from the previous-frame tracking result (see Sec. 3.1). During training, we only need to generate $r^w$ and $r^h$ from the ground truth width $g^w$ and height $g^h$, as the center of the temporal prior reference box is fixed at $r^{cx} = W / 2$ and $r^{cy} = H / 2$. In detail, with probability $q$ we use a smaller jitter $j = j_s$, and with probability $1 - q$ we use a larger jitter $j = j_l$. $r^w$ and $r^h$ are calculated as $r^w = e^{J^w}g^w$, $r^h = e^{J^h}g^h$, where $J^w$, $J^h \sim \mathcal{N}(0, j^2)$. + +Label mapping.
The classification loss of the network $\phi_{\theta}(\mathbf{z},\mathbf{x})$ is calculated on the target image $\mathcal{I}'$. Therefore, the classification labels on $\mathcal{I}'$ should be generated from the ground truth bounding box on $\mathcal{I}'$. The common practice is to use the normalized ground truth bounding box, because the normalized coordinates remain the same on $\mathcal{I}$ and $\mathcal{I}'$ under uniform resizing. However, this does not hold for our non-uniform resizing. Therefore, we need to map the ground truth bounding box from $\mathcal{I}$ to $\mathcal{I}'$ via the grid $\mathcal{G}$. Specifically, we map the top-left corner and bottom-right corner of the bounding box as follows. Given a point $(x,y)$ on the source image, we first find the two grid points on $\mathcal{G}$ that are closest to $(x,y)$. Then, the corresponding point $(x',y')$ can be obtained by bilinear interpolation between the indices of these two points. + +Reverse box mapping. The bounding box predicted by the network $\phi_{\theta}(\mathbf{z},\mathbf{x})$ is on $\mathcal{I}'$ and needs to be mapped back to $\mathcal{I}$ as the final tracking result. We map the top-left corner and bottom-right corner of the predicted bounding box from $\mathcal{I}'$ to $\mathcal{I}$ in a reversed process of the label mapping. Specifically, given a point $(x',y')$ on $\mathcal{I}'$, we first find the two grid points of $\mathcal{G}$ whose indices are closest to $(x',y')$, and then the corresponding point $(x,y)$ can be obtained by bilinear interpolation between the values of these points. To minimize the impact of the reverse mapping, we directly calculate the regression loss on $\mathcal{I}$ instead of on $\mathcal{I}'$. + +# 5 Experiments + +# 5.1 Implementation Details + +We apply our method to both the one-stream tracker OSTrack [33] (denoted as OSTrack-Zoom) and the CNN-Transformer tracker TransT [8] (denoted as TransT-Zoom).
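The paper solves Eq. (9) with the CVXOPT QP solver [20]. As an illustration only (this formulation and variable packing are ours, not the authors' code), the same equality-constrained QP can also be solved in closed form via its KKT system:

```python
import numpy as np

def solve_intervals(S, W, H, gamma=1.5, lam=1.0):
    """Solve the equality-constrained QP of Eq. (9) via its KKT system.
    S is the (m, n) importance map S_{l,k}; returns the row and column
    grid point intervals d^row (m,) and d^col (n,)."""
    m, n = S.shape
    S2 = S ** 2
    a, b = S2.sum(axis=1), S2.sum(axis=0)        # row / column weights
    alpha, beta = m / H, n / W                   # stretch normalisers (Eq. 8)
    cr, cc = H / (gamma * m), W / (gamma * n)    # zoom targets (Eq. 7)
    # quadratic form E = 0.5 d^T P d + q^T d + const, with d = [d^row; d^col]
    P = np.zeros((m + n, m + n))
    P[np.arange(m), np.arange(m)] = 2 * a * (1 + lam * alpha ** 2)
    P[np.arange(m, m + n), np.arange(m, m + n)] = 2 * b * (1 + lam * beta ** 2)
    P[:m, m:] = -2 * lam * alpha * beta * S2     # E_rigid row-column coupling
    P[m:, :m] = P[:m, m:].T
    q = np.concatenate([-2 * a * cr, -2 * b * cc])
    # equality constraints: sum(d^row) = H, sum(d^col) = W
    A = np.zeros((2, m + n))
    A[0, :m], A[1, m:] = 1, 1
    K = np.block([[P, A.T], [A, np.zeros((2, 2))]])
    rhs = np.concatenate([-q, [H, W]])
    sol = np.linalg.solve(K, rhs)
    return sol[:m], sol[m:m + n]
```

With $\gamma = 1$ the uniform intervals $d^{row} = H/m$, $d^{col} = W/n$ minimize both energies and satisfy the constraints, so the solver returns them unchanged; with $\gamma > 1$ the intervals shrink where $S$ is large, magnifying the corresponding patches.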
The proposed non-uniform resizing module is applied in both the training and testing stages on the search image. We do not apply the proposed non-uniform resizing on the template image because it degrades performance (see Sec. 5.3). The search patch size $(256 \times 256)$ is the same as the baseline methods, whereas the context factor is enlarged from 4 to 5. We use a controllable grid $\mathfrak{g}$ with shape $17 \times 17 (m = n = 16)$ . The importance score map is generated by a Gaussian function with bandwidth $\beta = 64$ . We set magnification factor $\gamma = 1.5$ and $\lambda = 1$ in Eq. (9). We use smaller jitter $j_{s} = 0.1$ with probability 0.8 and larger jitter $j_{l} = 0.5$ with probability 0.2. Notably, we use the same set of parameters for both OSTrack-Zoom and TransT-Zoom to demonstrate the generalization capability of our method. Except for the resizing operation, all the other training and testing protocols follow the corresponding baselines. The trackers are trained on four NVIDIA V100 GPUs. The inference speed and MACs are evaluated on a platform with Intel i7-11700 CPU and NVIDIA V100 GPU. + +# 5.2 State-of-the-art Comparison + +We compare our method with the state-of-the-art trackers on five challenging large-scale datasets: GOT-10k [16], LaSOT [11], LaSOText [12], TNL2K [30] and TrackingNet [23]. + +Table 1: State-of-the-art Comparison on GOT-10k, LaSOT, $\mathrm{LaSOT}_{ext}$ , TNL2K and TrackingNet. + +
| | Tracker | Size | GOT-10k AO | GOT-10k SR$_{0.5}$ | GOT-10k SR$_{0.75}$ | LaSOT AUC | LaSOT P | LaSOT$_{ext}$ AUC | LaSOT$_{ext}$ P | TNL2K AUC | TNL2K P | TrackingNet SUC | TrackingNet P | MACs (G) | FPS |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Baseline & Ours | OSTrack-Zoom | 256 | 73.5 | 83.6 | 70.0 | 70.2 | 76.2 | 50.5 | 57.4 | 56.5 | 57.3 | 83.2 | 82.2 | 21.5 | 100 |
| | OSTrack-256 [33] | 256 | 71.0 | 80.4 | 68.2 | 69.1 | 75.2 | 47.4 | 53.3 | 54.3 | - | 83.1 | 82.0 | 21.5 | 119 |
| | TransT-Zoom | 255 | 67.5 | 77.6 | 61.3 | 67.1 | 71.6 | 46.8 | 52.9 | 53.7 | 62.3 | 81.8 | 80.2 | 19.2 | 45 |
| | TransT [8] | 255 | 67.1 | 76.8 | 60.9 | 64.9 | 69.0 | 44.8 | 52.5 | 50.7 | 51.7 | 81.4 | 80.3 | 19.2 | 48 |
| Speed-oriented | SwinTrack-T [19] | 224 | 71.3 | 81.9 | 64.5 | 67.2 | 70.8 | 47.6 | 53.9 | 53.0 | 53.2 | 81.1 | 78.4 | 6.4 | 98 |
| | SimTrack-B [7] | 224 | 68.6 | 78.9 | 62.4 | 69.3 | - | - | - | 54.8 | 53.8 | 82.3 | - | 25.0 | 40 |
| | MixFormer-22k [9] | 320 | 70.7 | 80.0 | 67.8 | 69.2 | 74.7 | - | - | - | - | 83.1 | 81.6 | 23.0 | 25 |
| | ToMP-101 [21] | 288 | - | - | - | 68.5 | 73.5 | 45.9 | - | - | - | 81.5 | 78.9 | - | 20 |
| | TransInMo* [14] | 255 | - | - | - | 65.7 | 70.7 | - | - | 52.0 | 52.7 | 81.7 | - | 16.9 | 34 |
| | Stark-ST101 [31] | 320 | 68.8 | 78.1 | 64.1 | 67.1 | - | - | - | - | - | 82.0 | - | 28.0 | 32 |
| | AutoMatch [35] | 255 | 65.2 | 76.6 | 54.3 | 58.3 | 59.9 | 37.6 | 43.0 | 47.2 | 43.5 | 76.0 | 72.6 | - | 50 |
| | DiMP [4] | 288 | 61.1 | 71.7 | 49.2 | 56.9 | 56.7 | 39.2 | 45.1 | 44.7 | 43.4 | 74.0 | 68.7 | 5.4 | 40 |
| | SiamRPN++ [18] | 255 | 51.7 | 61.6 | 32.5 | 49.6 | 49.1 | 34.0 | 39.6 | 41.3 | 41.2 | 73.3 | 69.4 | 7.8 | 35 |
| Performance-oriented | OSTrack-384 [33] | 384 | 73.7 | 83.2 | 70.8 | 71.1 | 77.6 | 50.5 | 57.6 | 55.9 | - | 83.9 | 83.2 | 48.3 | 61 |
| | SwinTrack-B [19] | 384 | 72.4 | 80.5 | 67.8 | 71.3 | 76.5 | 49.1 | 55.6 | 55.9 | 57.1 | 84.0 | 82.8 | 69.7 | 45 |
| | SimTrack-L [7] | 224 | 69.8 | 78.8 | 66.0 | 70.5 | - | - | - | 55.6 | 55.7 | 83.4 | - | 95.4 | - |
| | MixFormer-L [9] | 320 | - | - | - | 70.1 | 76.3 | - | - | - | - | 83.9 | 83.1 | 127.8 | 18 |
+ +GOT-10k [16] provides training and test splits with zero-overlapped object classes. Trackers are required to be trained only on the GOT-10k training split and then evaluated on the test split to examine their generalization capability. We follow this protocol and report our results in Tab. 1. Our OSTrack-Zoom achieves $73.5\%$ AO, which surpasses the baseline OSTrack by $2.5\%$. And our TransT-Zoom achieves $67.5\%$ AO, which surpasses the baseline TransT by $0.4\%$. Compared with previous speed-oriented trackers, our OSTrack-Zoom improves all three metrics on GOT-10k by a large margin, proving that our method has good generalization capability. + +LaSOT [11] is a large-scale dataset with 280 long-term video sequences. As shown in Tab. 1, our method improves the AUC of OSTrack and TransT by $1.1\%$ and $2.2\%$ respectively. Our OSTrack-Zoom has the best performance among speed-oriented trackers on LaSOT, indicating that our method is well-suited for the complex object motion in long-term visual tracking. + +$\mathbf{LaSOT}_{ext}$ [12] is an extension of LaSOT with 150 additional videos containing objects from 15 novel classes outside of LaSOT. In Tab. 1, our method achieves $3.1\%$ and $2.0\%$ gains on the AUC metric for the OSTrack and TransT baselines respectively. Our OSTrack-Zoom achieves a promising $50.5\%$ AUC and $57.4\%$ Precision, outperforming other methods by a large margin. + +TNL2K [30] is a newly proposed dataset with new challenging factors like adversarial samples, modality switch, etc. As reported in Tab. 1, our method improves OSTrack and TransT by $2.2\%$ and $3.0\%$ on AUC. Our OSTrack-Zoom sets the new state-of-the-art performance on TNL2K with $56.5\%$ AUC while running at 100 fps. + +TrackingNet [23] is a large-scale short-term tracking benchmark. Our method boosts the performance of OSTrack and TransT by $0.1\%$ and $0.4\%$ SUC.
The reason for the relatively small improvements is that, contrary to other datasets, TrackingNet has fewer sequences with scenarios that can benefit from an enlarged visual field, e.g., scenarios with drastic movement or significant drift after some period of time. Please refer to Appendix F for a detailed analysis. + +Comparison with performance-oriented trackers. We additionally compare our method with the performance-oriented versions of the SOTA trackers. On the one hand, on GOT-10k, $\mathrm{LaSOT}_{ext}$ and TNL2K, whose test-split object classes have little or no overlap with the training data, our OSTrack-Zoom is on par with the SOTA tracker OSTrack-384, while consuming only $44.5\%$ of the MACs of the latter and running $50\%$ faster. On the other hand, on LaSOT and TrackingNet, whose test splits have many object classes appearing in the training data, our OSTrack-Zoom is slightly inferior to some SOTA trackers. A possible reason is that these SOTA trackers with heavier networks fit the training data better, which is + +Table 2: Comparison of accuracy, latency and target size using different resizing methods with OSTrack [33].
| # | Method | Size | TNL2K AUC | TNL2K P | LaSOT AUC | LaSOT P | Target size avg↑ ($10^3$) | Target size std↓ ($10^3$) | Resize latency (ms) | Net latency (ms) | FPS |
|---|---|---|---|---|---|---|---|---|---|---|---|
| ① | Uniform | 256 | 55.5 | 55.4 | 69.5 | 75.2 | 2.7 | 24.8 | 0.04 | 8.40 | 119 |
| ② | Uniform | 320 | 56.1 | 56.8 | 69.9 | 76.2 | 4.2 | 38.8 | 0.04 | 10.92 | 92 |
| ③ | FOVEA | 256 | 54.2 | 54.5 | 67.6 | 73.3 | 4.4 | 40.8 | 3.37 | 8.40 | 85 |
| ④ | Ours | 256 | 56.5 | 57.3 | 70.2 | 76.2 | 4.9 | 25.0 | 1.58 | 8.40 | 100 |
+ +Table 3: Ablation of hyper-parameters on LaSOT [11] with OSTrack [33]. + +
(a) Bandwidth $\beta$

| $\beta$ | 4 | 64 | 256 |
|---|---|---|---|
| AUC | 69.1 | 70.2 | 69.6 |
+ +
(c) Zoom factor $\gamma$

| $\gamma$ | 1.25 | 1.5 | 1.75 |
|---|---|---|---|
| AUC | 69.7 | 70.2 | 69.4 |
+ +
(b) Control grid size

| Size | 8 | 16 | 32 |
|---|---|---|---|
| AUC | 66.4 | 70.2 | 69.5 |
+ +
(d) Balance factor $\lambda$

| $\lambda$ | 0 | 1 | 2 |
|---|---|---|---|
| AUC | 69.8 | 70.2 | 69.6 |
+ +Table 4: Ablation on the effect of applying non-uniform resizing in training and testing stages. Results are reported in AUC (%) metric. + +
| # | Testing | Training | OSTrack [33] LaSOT [11] | OSTrack [33] TNL2K [30] | TransT [8] LaSOT [11] | TransT [8] TNL2K [30] |
|---|---|---|---|---|---|---|
| ① | | | 69.1 | 54.3 | 64.9 | 50.7 |
| ② | ✓ | | 69.1 | 55.9 | 66.3 | 53.6 |
| ③ | ✓ | ✓ | 70.2 | 56.5 | 67.1 | 53.7 |
+ +Table 5: Ablation on the effect of applying non-uniform resizing on the template and search images with OSTrack [33]. + +
| # | Template | Search | LaSOT [11] AUC | LaSOT [11] P |
|---|---|---|---|---|
| ① | ✓ | | 64.8 | 71.0 |
| ② | | ✓ | 70.2 | 76.2 |
| ③ | ✓ | ✓ | 69.6 | 75.5 |
beneficial to track objects with classes that have been seen during training. Nevertheless, the gap between speed-oriented and performance-oriented trackers is narrowed using our method. + +# 5.3 Ablation Experiment + +Comparison with other resizing methods. As shown in Tab. 2, we compare our non-uniform resizing method (④) with uniform resizing at small/large input sizes (①/②) and FOVEA [27] non-uniform resizing (③). On both TNL2K and LaSOT, our method outperforms the other methods. Notably, while applying FOVEA causes significantly degraded performance (① vs. ③), our method achieves large performance gains (① vs. ④). To understand the reason behind this, we calculate the average target size (avg) and the standard deviation of the target size (std), which represent the degree of the targets' scale change, on LaSOT. Compared with uniform resizing (① vs. ④), our method achieves $1.34^2 \times$ magnification of the average target size, while the standard deviation stays almost the same. The magnification is close to the desired $\gamma^2 = 1.5^2$, showing that our method achieves controllable magnification. In contrast to FOVEA (③ vs. ④), which severely reduces the tracking performance, our standard deviation is almost unchanged, demonstrating that our method avoids the extreme deformation that severely changes the scale of the target. + +Computational overhead analysis. Our resizing module runs purely on CPU. The latency for resizing an image is $1.58\,\mathrm{ms}$ on an Intel i7-11700 CPU. Compared with uniform resizing (① vs. ④), our method introduces an additional $1.54\,\mathrm{ms}$ latency during resizing, causing a slightly slower speed. However, our performance is better than the uniform resizing counterparts. Especially when compared with uniform resizing at a larger input size (② vs. ④), our method reaches higher performance while running faster, demonstrating the superiority of using non-uniform resizing in visual tracking.
Our method is also faster than FOVEA resizing while having better performance. Moreover, the computation for resizing an image is $1.28$M MACs ($10^{6}$), which is negligible compared to the network inference, which usually takes tens of GMACs ($10^{9}$). + +Resizing in different stages. It is worth noting that our method can be directly applied to off-the-shelf tracking models, i.e., only applying the non-uniform resizing method at testing time. Results on LaSOT [11] and TNL2K [30] are reported in Tab. 4 using the AUC (%) metric. As shown in Tab. 4, directly applying our method to the off-the-shelf trackers only in testing can already improve the tracking accuracy (① vs. ②). Further applying our resizing in training boosts the performance again, owing to the aligned training and testing protocols (② vs. ③). + +Resizing template and search images separately. We also investigate the effects of applying the non-uniform resizing on the template and search images separately. As shown in Tab. 5, either applying non-uniform resizing on the template image alone or on both the template and search images performs worse than applying it only on the search image. We think the reason is that the template + +![](images/925215b924e5382e61f39ea16f212ad7a9548014910c7ee5f135a39a4207ae72.jpg) +Figure 4: Visualization of different resizing methods. For each set of images, from left to right: ours, uniform resizing, FOVEA [27] resizing. The targets are marked with red boxes. (left) Our method can magnify the target without drastically changing the appearance of the target. (right) Our method can magnify the target without significantly changing the aspect ratio of the target. + +![](images/eb66db3f051c76cfeca6000bceee550d05ae65f3ca5c982ff700503575ed16bc.jpg) + +image must have a relatively small context factor to contain minimal necessary context, which makes the whole template region important for the tracking network to locate the target.
Therefore, the tracker cannot benefit from non-uniform resizing of the template image. + +More ablations. First, we study the impact of the bandwidth $\beta$ used in the importance function. A smaller $\beta$ means less area is considered important and vice versa. As shown in Tab. 3a, the ideal value for $\beta$ is 64. Both a smaller bandwidth (4) and a larger bandwidth (256) cause reduced performance, showing the importance of this factor. Second, we analyze the impact of the controllable grid size. As shown in Tab. 3b, $m = n = 16$ is the best size for this grid. A smaller grid size causes a drastic drop in AUC, showing that a too-small grid size leads to inaccurate resizing that damages the tracking performance. A larger grid size also degrades performance; we believe the reason is that a large grid size leads to an over-reliance on the important area generated from the temporal prior, which is, however, not always correct. Third, we look into the impact of different zoom factors $\gamma$. As shown in Tab. 3c, 1.5 is the best value for $\gamma$. A smaller $\gamma$ degrades performance due to a smaller target resolution. Meanwhile, a larger $\gamma$ is also harmful, as larger magnification causes trouble when trying to rediscover a lost target. Finally, we study the impact of $E_{rigid}$ by changing the balance factor $\lambda$. As shown in Tab. 3d, setting $\lambda = 1$ reaches the best performance. When $\lambda = 0$, the performance drops, demonstrating the effectiveness of the proposed $E_{rigid}$. A larger $\lambda = 2$ also degrades the performance, indicating that too strong a constraint is harmful to the performance. + +Visualization. We visualize our proposed non-uniform resizing, uniform resizing and FOVEA [27] resizing in Fig. 4. Compared with uniform resizing, our method can improve the resolution of the target to retain more raw information.
Compared with FOVEA, our method can better preserve the aspect ratio and the appearance of the target. + +# 6 Conclusion + +In this paper, we presented a novel non-uniform resizing method for visual tracking, which efficiently improves the tracking performance by simultaneously attending to a larger visual field and retaining more raw information with high resolution. Inspired by the HVS data processing with limited resources, our method formulates the non-uniform resizing as an explicitly controlled magnification of important areas with restriction on extreme deformation. Extensive experiments have demonstrated that our method can bridge the gap between speed-oriented and performance-oriented trackers with negligible computational overhead. + +Limitations. One limitation of our method is that we need a fixed context factor to determine the size of the visual field. It would be interesting to develop a method to dynamically adjust the size of the visual field according to the tracking trajectory or appearance of the target in future work. + +Acknowledgements. The authors would like to thank the anonymous reviewers for their valuable comments and suggestions. This work was supported in part by the National Key R&D Program of China (Grant No. 2020AAA0106800), the Natural Science Foundation of China (Grant No. U22B2056, 61972394, 62036011, 62192782, 61721004, U2033210), the Beijing Natural Science Foundation (Grant No. L223003, JQ22014, 4234087), the Major Projects of Guangdong Education Department for Foundation Research and Applied Research (Grant No. 2017KZDXM081, 2018KZDXM066), the Guangdong Provincial University Innovation Team Project (Grant No. 2020KCXTD045). Jin Gao and Bing Li were also supported in part by the Youth Innovation Promotion Association, CAS. + +# References + +[1] Shai Avidan and Ariel Shamir. Seam carving for content-aware image resizing. In SIGGRAPH. 2007. +[2] Babak Ehteshami Bejnordi, Amirhossein Habibian, Fatih Porikli, and Amir Ghodrati. 
SALISA: Saliency-based input sampling for efficient video object detection. In ECCV, 2022. +[3] Luca Bertinetto, Jack Valmadre, João F. Henriques, Andrea Vedaldi, and Philip H. S. Torr. Fully convolutional siamese networks for object tracking. In ECCV Workshops, 2016. +[4] Goutam Bhat, Martin Danelljan, Luc Van Gool, and Radu Timofte. Learning discriminative model prediction for tracking. In ICCV, 2019. +[5] Philippe Blatter, Menelaos Kanakis, Martin Danelljan, and Luc Van Gool. Efficient visual tracking with exemplar transformers. In WACV, 2023. +[6] Vasil Borsuk, Roman Vei, Orest Kupyn, Tetiana Martyniuk, Igor Krashenyi, and Jiri Matas. FEAR: Fast, efficient, accurate and robust visual tracker. In ECCV, 2022. +[7] Boyu Chen, Peixia Li, Lei Bai, Lei Qiao, Qiuhong Shen, Bo Li, Weihao Gan, Wei Wu, and Wanli Ouyang. Backbone is all your need: A simplified architecture for visual object tracking. In ECCV, 2022. +[8] Xin Chen, Bin Yan, Jiawen Zhu, Dong Wang, Xiaoyun Yang, and Huchuan Lu. Transformer tracking. In CVPR, 2021. +[9] Yutao Cui, Cheng Jiang, Limin Wang, and Gangshan Wu. MixFormer: End-to-end tracking with iterative mixed attention. In CVPR, 2022. +[10] Yutao Cui, Cheng Jiang, Gangshan Wu, and Limin Wang. MixFormer: End-to-end tracking with iterative mixed attention. arXiv preprint arXiv: 2302.02814, 2023. +[11] Heng Fan, Liting Lin, Fan Yang, Peng Chu, Ge Deng, Sijia Yu, Hexin Bai, Yong Xu, Chunyuan Liao, and Haibin Ling. LaSOT: A high-quality benchmark for large-scale single object tracking. In CVPR, 2019. +[12] Heng Fan, Hexin Bai, Liting Lin, Fan Yang, Peng Chu, Ge Deng, Sijia Yu, Mingzhen Huang, Juehuan Liu, Yong Xu, et al. LaSOT: A high-quality large-scale single object tracking benchmark. International Journal of Computer Vision, 129:439-461, 2021. +[13] Benjamin Graham, Alaaeldin El-Nouby, Hugo Touvron, Pierre Stock, Armand Joulin, Hervé Jégou, and Matthijs Douze. LeViT: A vision transformer in convnet's clothing for faster inference. In ICCV, 2021. 
[14] Mingzhe Guo, Zhipeng Zhang, Heng Fan, Liping Jing, Yilin Lyu, Bing Li, and Weiming Hu. Learning target-aware representation for visual tracking via informative interactions. In IJCAI, 2022.
[15] Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr Dollár, and Ross Girshick. Masked autoencoders are scalable vision learners. In CVPR, 2022.
[16] Lianghua Huang, Xin Zhao, and Kaiqi Huang. GOT-10k: A large high-diversity benchmark for generic object tracking in the wild. IEEE Transactions on Pattern Analysis and Machine Intelligence, 43(5):1562–1577, 2019.
[17] Ben Kang, Xin Chen, Dong Wang, Houwen Peng, and Huchuan Lu. Exploring lightweight hierarchical vision transformers for efficient visual tracking. In ICCV, 2023.
[18] Bo Li, Wei Wu, Qiang Wang, Fangyi Zhang, Junliang Xing, and Junjie Yan. SiamRPN++: Evolution of siamese visual tracking with very deep networks. In CVPR, 2019.
[19] Liting Lin, Heng Fan, Zhipeng Zhang, Yong Xu, and Haibin Ling. SwinTrack: A simple and strong baseline for transformer tracking. In NeurIPS, 2022.
[20] Martin S. Andersen, Joachim Dahl, and Lieven Vandenberghe. CVXOPT: A python package for convex optimization (version 1.3.0), 2022. URL http://cvxopt.org/index.html.
[21] Christoph Mayer, Martin Danelljan, Goutam Bhat, Matthieu Paul, Danda Pani Paudel, Fisher Yu, and Luc Van Gool. Transforming model prediction for tracking. In CVPR, 2022.
[22] Matthias Mueller, Neil Smith, and Bernard Ghanem. A benchmark and simulator for UAV tracking. In ECCV, 2016.
[23] Matthias Müller, Adel Bibi, Silvio Giancola, Salman Alsubaihi, and Bernard Ghanem. TrackingNet: A large-scale dataset and benchmark for object tracking in the wild. In ECCV, 2018.
[24] Daniele Panozzo, Ofir Weber, and Olga Sorkine. Robust image retargeting via axis-aligned deformation. In EUROGRAPHICS, 2012.
[25] Adrià Recasens, Petr Kellnhofer, Simon Stent, Wojciech Matusik, and Antonio Torralba.
Learning to zoom: A saliency-based sampling layer for neural networks. In ECCV, 2018.
[26] Jianbing Shen, Yuanpei Liu, Xingping Dong, Xiankai Lu, Fahad Shahbaz Khan, and Steven Hoi. Distilled siamese networks for visual tracking. IEEE Transactions on Pattern Analysis and Machine Intelligence, 44(12):8896–8909, 2021.
[27] Chittesh Thavamani, Mengtian Li, Nicolas Cebron, and Deva Ramanan. FOVEA: Foveated image magnification for autonomous navigation. In ICCV, 2021.
[28] Chittesh Thavamani, Mengtian Li, Francesco Ferroni, and Deva Ramanan. Learning to zoom and unzoom. In CVPR, 2023.
[29] Shaoru Wang, Jin Gao, Zeming Li, Xiaoqin Zhang, and Weiming Hu. A closer look at self-supervised lightweight vision transformers. In ICML, 2023.
[30] Xiao Wang, Xiujun Shu, Zhipeng Zhang, Bo Jiang, Yaowei Wang, Yonghong Tian, and Feng Wu. Towards more flexible and accurate object tracking with natural language: Algorithms and benchmark. In CVPR, 2021.
[31] Bin Yan, Houwen Peng, Jianlong Fu, Dong Wang, and Huchuan Lu. Learning spatio-temporal transformer for visual tracking. In ICCV, 2021.
[32] Bin Yan, Houwen Peng, Kan Wu, Dong Wang, Jianlong Fu, and Huchuan Lu. LightTrack: Finding lightweight neural networks for object tracking via one-shot architecture search. In CVPR, 2021.
[33] Botao Ye, Hong Chang, Bingpeng Ma, Shiguang Shan, and Xilin Chen. Joint feature learning and relation modeling for tracking: A one-stream framework. In ECCV, 2022.
[34] Zhipeng Zhang and Houwen Peng. Deeper and wider siamese networks for real-time visual tracking. In CVPR, 2019.
[35] Zhipeng Zhang, Yihao Liu, Xiao Wang, Bing Li, and Weiming Hu. Learn to match: Automatic matching network design for visual tracking. In ICCV, 2021.
[36] Li Zhaoping. A new framework for understanding vision from the perspective of the primary visual cortex. Current Opinion in Neurobiology, 58:1–10, 2019.
[37] Heliang Zheng, Jianlong Fu, Zheng-Jun Zha, and Jiebo Luo.
Looking for the devil in the details: Learning trilinear attention sampling network for fine-grained image recognition. In CVPR, 2019.

# Appendices

# A Derivations of the QP-based Minimization Task for Grid Manipulation

The minimization task for grid manipulation in our non-uniform resizing is defined as

$$
\underset{d_l^{row},\, d_k^{col}}{\text{minimize}} \quad E = E_{zoom} + \lambda E_{rigid}
$$

$$
\text{subject to} \quad \sum_{l=1}^{m} d_l^{row} = H, \quad \sum_{k=1}^{n} d_k^{col} = W \tag{A1}
$$

where the zoom energy $E_{zoom}$ is defined as

$$
E_{zoom} = \sum_{l=1}^{m} \sum_{k=1}^{n} S_{k,l}^{2} \left( \left( d_l^{row} - \frac{1}{\gamma}\frac{H}{m} \right)^{2} + \left( d_k^{col} - \frac{1}{\gamma}\frac{W}{n} \right)^{2} \right), \tag{A2}
$$

and the rigid energy $E_{rigid}$ is defined as

$$
E_{rigid} = \sum_{l=1}^{m} \sum_{k=1}^{n} S_{k,l}^{2} \left( \frac{m}{H} d_l^{row} - \frac{n}{W} d_k^{col} \right)^{2}. \tag{A3}
$$

The minimization task in Eq. (A1) can be efficiently solved using a standard QP solver [20], which requires converting it into the general form of QP, i.e.,

$$
\underset{d}{\text{minimize}} \quad E = \frac{1}{2} d^{\top} P d + q^{\top} d \tag{A4}
$$

$$
\text{subject to} \quad A d = b
$$

where $d = (d_1^{row}, \dots, d_m^{row}, d_1^{col}, \dots, d_n^{col})^{\top} \in \mathbb{R}^{m+n}$ is the vector of unknown grid intervals to be optimized. $P \in \mathbb{R}^{(m+n)\times(m+n)}$ and $q \in \mathbb{R}^{m+n}$ can be derived from the energy function $E = E_{zoom} + \lambda E_{rigid}$. $A \in \mathbb{R}^{2\times(m+n)}$ and $b \in \mathbb{R}^{2}$ can be derived from the linear constraint in Eq. (A1).

Converting the energy function into the matrix form.
Denote $\mathcal{S}_k^{col} = \sum_{l=1}^{m} S_{k,l}^{2}$ and $\mathcal{S}_l^{row} = \sum_{k=1}^{n} S_{k,l}^{2}$; the overall energy $E$ can then be expanded as

$$
\begin{aligned}
E = {} & \sum_{l=1}^{m} \mathcal{S}_l^{row} \left( \lambda \left( \frac{m}{H} \right)^{2} + 1 \right) (d_l^{row})^{2} + \sum_{k=1}^{n} \mathcal{S}_k^{col} \left( \lambda \left( \frac{n}{W} \right)^{2} + 1 \right) (d_k^{col})^{2} \\
& - \sum_{l=1}^{m} 2 \mathcal{S}_l^{row} \frac{H}{\gamma m} d_l^{row} - \sum_{k=1}^{n} 2 \mathcal{S}_k^{col} \frac{W}{\gamma n} d_k^{col} \\
& - \sum_{l=1}^{m} \sum_{k=1}^{n} 2 \lambda S_{k,l}^{2} \frac{mn}{HW} d_l^{row} d_k^{col} + C
\end{aligned} \tag{A5}
$$

where $C$ is a constant term that can be ignored during minimization. The first, second and third lines of Eq. (A5) consist of squared terms, linear terms and cross terms, respectively.

Thus, the overall energy $E$ can be converted into the matrix form described in Eq. (A4) by fitting the above terms into $P \in \mathbb{R}^{(m+n)\times(m+n)}$ and $q \in \mathbb{R}^{m+n}$, i.e.,

$$
P = \begin{pmatrix} P^{row} & P^{cross} \\ (P^{cross})^{\top} & P^{col} \end{pmatrix}, \quad q = \begin{pmatrix} q^{row} \\ q^{col} \end{pmatrix} \tag{A6}
$$

where $P^{row} \in \mathbb{R}^{m \times m}$ and $P^{col} \in \mathbb{R}^{n \times n}$ are diagonal matrices accounting for the squared terms in $E$.
$P^{row}$ and $P^{col}$ can be defined as

$$
P_{k,l}^{row} = \begin{cases} 2 \mathcal{S}_l^{row} \left( \lambda \left( \frac{m}{H} \right)^{2} + 1 \right), & k = l \\ 0, & \text{otherwise} \end{cases}, \quad P_{k,l}^{col} = \begin{cases} 2 \mathcal{S}_k^{col} \left( \lambda \left( \frac{n}{W} \right)^{2} + 1 \right), & k = l \\ 0, & \text{otherwise} \end{cases}. \tag{A7}
$$

$P^{cross} \in \mathbb{R}^{m \times n}$ is a matrix accounting for the cross terms in $E$, while $q^{row} \in \mathbb{R}^{m}$ and $q^{col} \in \mathbb{R}^{n}$ account for the linear terms. $P^{cross}$, $q^{row}$ and $q^{col}$ can be defined as

$$
P_{k,l}^{cross} = -2 \lambda S_{k,l}^{2} \frac{mn}{HW}, \quad q_l^{row} = -2 \mathcal{S}_l^{row} \frac{H}{\gamma m}, \quad q_k^{col} = -2 \mathcal{S}_k^{col} \frac{W}{\gamma n}. \tag{A8}
$$

Converting the linear constraint into the matrix form. The linear constraint fits into the matrix form described in Eq. (A4) by defining $A \in \mathbb{R}^{2\times(m+n)}$ and $b \in \mathbb{R}^{2}$ as

$$
A_{k,l} = \begin{cases} 1, & k = 1,\ l = 1, \dots, m \\ 1, & k = 2,\ l = m+1, \dots, m+n \\ 0, & \text{otherwise} \end{cases}, \quad b = (H, W)^{\top}. \tag{A9}
$$

Proof of the convexity. When the matrix $P$ in Eq. (A6) is positive semi-definite, the energy function in Eq. (A1) is convex and a global minimum can be found. Indeed, $P$ is positive semi-definite since, for any $d \in \mathbb{R}^{m+n}$,

$$
\begin{aligned}
d^{\top} P d = {} & 2 \left( \sum_{l=1}^{m} \mathcal{S}_l^{row} \left( \lambda \left( \frac{m}{H} \right)^{2} + 1 \right) (d_l^{row})^{2} + \sum_{k=1}^{n} \mathcal{S}_k^{col} \left( \lambda \left( \frac{n}{W} \right)^{2} + 1 \right) (d_k^{col})^{2} \right. \\
& \left. - \sum_{l=1}^{m} \sum_{k=1}^{n} 2 \lambda S_{k,l}^{2} \frac{mn}{HW} d_l^{row} d_k^{col} \right) \\
= {} & 2 \left( \sum_{l=1}^{m} \mathcal{S}_l^{row} (d_l^{row})^{2} + \sum_{k=1}^{n} \mathcal{S}_k^{col} (d_k^{col})^{2} + \lambda \sum_{l=1}^{m} \sum_{k=1}^{n} S_{k,l}^{2} \left( \frac{m}{H} d_l^{row} - \frac{n}{W} d_k^{col} \right)^{2} \right) \\
\geq {} & 0. 
\end{aligned} \tag{A10}
$$

# B Attribute-based Analysis on LaSOT

We further analyze the AUC gains brought by our method on the two baselines, i.e., OSTrack and TransT, based on the attributes of the videos in LaSOT [11]. The results are displayed in Fig. A1.

![](images/9eec5eb3852e3ac2f3d3b23875718d51059abc5bf85779a0f57d4061c4e13575.jpg)
Figure A1: Attribute-based analysis of AUC gains on LaSOT [11]

As shown in Fig. A1, our method significantly improves the performance of both baselines under challenging scenarios with drastic movement (see Fast Motion (FM), Viewpoint Change (VC), etc.) or significant drift after some period of time (see Full Occlusion (FOC), Out-of-View (OV), etc.), owing to the enlarged visual field. Performance on videos with low resolution (LR) is also considerably improved thanks to the zoom-in behavior. The deformation caused by non-uniform resizing may cause slight performance degradation when encountering background clutter (BC) or deformation (DEF).

# C Comparison Between Non-uniform Resizing and Optimized Uniform Resizing Baseline

To demonstrate the superiority of our method, we additionally explore whether optimizing the crop hyper-parameters in the uniform resizing baseline can have a similar effect to our proposed non-uniform resizing; the results are shown in Tab. A1.
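To make the construction in Appendix A concrete, the sketch below assembles $P$, $q$, $A$, $b$ following Eqs. (A6)–(A9) and solves the equality-constrained QP through its KKT system. This is a minimal NumPy illustration under toy assumptions (small $m$, $n$ and a random stand-in importance map $S$); the paper's implementation instead calls the CVXOPT solver [20].

```python
import numpy as np

# Toy sizes for illustration (the paper uses m = n = 16 and 256x256 search images).
m, n = 4, 4            # controllable grid size (rows x cols) -- assumed toy values
H, W = 256.0, 256.0    # target resized height/width
gamma, lam = 1.5, 1.0  # zoom factor and balance factor

rng = np.random.default_rng(0)
S = rng.random((n, m))            # hypothetical importance map S_{k,l}; k: col, l: row
S2 = S ** 2
S_row = S2.sum(axis=0)            # S_l^row = sum_k S_{k,l}^2, shape (m,)
S_col = S2.sum(axis=1)            # S_k^col = sum_l S_{k,l}^2, shape (n,)

# Diagonal blocks (Eq. A7), cross block and linear terms (Eq. A8).
P_row = np.diag(2 * S_row * (lam * (m / H) ** 2 + 1))
P_col = np.diag(2 * S_col * (lam * (n / W) ** 2 + 1))
P_cross = -2 * lam * (m * n) / (H * W) * S2.T          # shape (m, n), entry (l, k)
P = np.block([[P_row, P_cross], [P_cross.T, P_col]])
q = np.concatenate([-2 * S_row * H / (gamma * m),
                    -2 * S_col * W / (gamma * n)])

# Equality constraint Ad = b (Eq. A9): row intervals sum to H, col intervals to W.
A = np.zeros((2, m + n))
A[0, :m] = 1.0
A[1, m:] = 1.0
b = np.array([H, W])

# P is positive semi-definite (Eq. A10), so the global minimizer under Ad = b
# solves the KKT linear system [P A^T; A 0][d; nu] = [-q; b].
KKT = np.block([[P, A.T], [A, np.zeros((2, 2))]])
sol = np.linalg.solve(KKT, np.concatenate([-q, b]))
d = sol[:m + n]
d_row, d_col = d[:m], d[m:]

assert np.all(np.linalg.eigvalsh(P) >= -1e-8)  # numerical convexity check
print(d_row.sum(), d_col.sum())                # both sums match H and W by the constraint
```

The resized image is then produced by sampling the source image on the grid whose cumulative boundaries are given by `d_row` and `d_col`; that sampling step is omitted here.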
Although our limited compute resources prevent us from optimizing the above hyper-parameters on large tracking datasets in a grid-search manner, our experiments clearly show that such optimization can hardly achieve effects similar to our resizing. Even for the baseline OSTrack-256 [33] with the optimal crop hyper-parameters on LaSOT [11] (69.5 AUC), we experimentally find that it only achieves AUC gains of $+0.4$ and $+0.6$ on $\mathrm{LaSOT}_{ext}$ [12] and TNL2K [30] respectively, while performing worse than the original OSTrack-256 on TrackingNet [23] (83.1 SUC vs. 82.1 SUC). In other words, our method still outperforms the above optimal baseline on LaSOT ($+0.7$ AUC), $\mathrm{LaSOT}_{ext}$ ($+2.7$ AUC), TNL2K ($+1.6$ AUC), and TrackingNet ($+1.1$ SUC). Moreover, the experiments show that the mismatch of object size between the search and template images caused by increasing the search-image context factor has a relatively small effect on the final tracking accuracy for modern transformer trackers; the low resolution of the object is the major cause of the worse accuracy.

Table A1: Comparison results of our method and the uniform resizing baseline with its crop hyper-parameters varied (i.e., setting $c^w$ and $c^h$ using two different choices as in Sec. 3.1, varying the search-image context factor, and varying the template-image size to alleviate the mismatch of the object size between the search and template images). The experiments are based on OSTrack-256 [33] and tested on LaSOT [11]. The context factor and size of search/template images are displayed as context factor/size. The choices of $c^w$, $c^h$: (1) $c^w = b^w$, $c^h = b^h$; (2) $c^w = c^h = (b^w + b^h)/2$.
| Setting | Search | Template | $c^w, c^h$ choice | AUC |
|---|---|---|---|---|
| Original | 4/256 | 2/128 | 1 | 69.1 |
| $c^w, c^h$ varied | 4/256 | 2/128 | 2 | 68.5 |
| Object size mismatched | 4.5/256 | 2/128 | 1 | 69.0 |
| Object size mismatched | 5/256 | 2/128 | 1 | 69.5 |
| Object size mismatched | 5.5/256 | 2/128 | 1 | 68.2 |
| Object size mismatched | 6/256 | 2/128 | 1 | 67.5 |
| Object size matched | 4.5/256 | 2/114 | 1 | 69.1 |
| Object size matched | 5/256 | 2/102 | 1 | 68.8 |
| Object size matched | 5.5/256 | 2/93 | 1 | 68.0 |
| Object size matched | 6/256 | 2/85 | 1 | 67.4 |
| Ours | 5/256 | 2/128 | 1 | 70.2 |
# D More Experimental Results on UAV123

In this section, we report the performance comparison on an additional benchmark, UAV123 [22]. UAV123 is an aerial tracking benchmark captured from low-altitude UAVs, with more small objects and larger variations in bounding box size and aspect ratio. As shown in Tab. A2, our method achieves AUC gains of $1.0\%$ and $0.6\%$ for the OSTrack [33] and TransT [8] baselines, demonstrating that our method can improve tracking performance in a wide range of tracking scenarios.

Table A2: Comparison with state-of-the-art trackers on UAV123 [22]
| Tracker | OSTrack-Zoom | OSTrack [33] | TransT-Zoom | TransT [8] | MixFormer-L [9] | ToMP [21] | DiMP [4] | SiamFC [3] |
|---|---|---|---|---|---|---|---|---|
| AUC (%) | 69.3 | 68.3 | 69.7 | 69.1 | 69.5 | 69.0 | 65.3 | 37.7 |
# E More Experimental Results with SwinTrack as the Baseline

We further apply our method to both the performance-oriented version SwinTrack-B (backbone: Swin Transformer-Base) and the speed-oriented version SwinTrack-T (backbone: Swin Transformer-Tiny) of SwinTrack [19] to experimentally show the generalization capability of our proposed resizing method across a wide range of visual tracking algorithms. Note that we use the v1 version of SwinTrack (without the motion token) as the baseline because only the code of the v1 version is publicly available.

As shown in Tab. A3, our method brings consistent improvements for both SwinTrack-T and SwinTrack-B. Particularly worth mentioning is that our SwinTrack-B-Zoom can even outperform SwinTrack-B-384 with a much smaller input size.

Table A3: Further validation of applying our non-uniform resizing to both the performance-oriented version (backbone: Swin Transformer-Base) and the speed-oriented version (backbone: Swin Transformer-Tiny) of SwinTrack [19]. Note that we use the v1 version of SwinTrack (without the motion token) as the baseline because only the code of the v1 version is publicly available.
| Type | Tracker | Size | LaSOT [11] AUC | LaSOT [11] P |
|---|---|---|---|---|
| Speed-oriented | SwinTrack-T-Zoom | 224 | 68.5 | 72.9 |
| Speed-oriented | SwinTrack-T [19] | 224 | 66.7 | 70.6 |
| Performance-oriented | SwinTrack-B-Zoom | 224 | 70.5 | 75.4 |
| Performance-oriented | SwinTrack-B [19] | 224 | 69.6 | 74.1 |
| Performance-oriented | SwinTrack-B-384 [19] | 384 | 70.2 | 75.3 |
# F Analysis on Small Improvement Over TrackingNet

To find the reason for the small improvement over TrackingNet [23], we additionally analyze the ratios of challenging scenarios appearing in the different tracking datasets (see Tab. A4), which clearly show that the test split of TrackingNet has a very different profile from LaSOT [11], $\mathrm{LaSOT}_{ext}$ [12], and TNL2K [30] with respect to the ratios of challenging scenarios. As shown in Fig. A1, our method achieves significant performance gains under challenging scenarios with drastic movement (see Fast Motion (FM), Viewpoint Change (VC), etc.) or significant drift after some period of time (see Full Occlusion (FOC), Out-of-View (OV), etc.), owing to the enlarged visual field. However, the ratios of the challenging scenarios FOC, OV and FM in TrackingNet (VC is not labeled in TrackingNet) are significantly lower than in LaSOT, $\mathrm{LaSOT}_{ext}$, and TNL2K, which may be the reason for the small improvement of our approach on TrackingNet.

Table A4: Ratios of challenging scenarios appearing in the different tracking datasets. Our method achieves significant performance gains under challenging scenarios with drastic movement (e.g., Fast Motion (FM), Viewpoint Change (VC)) or significant drift after some period of time (e.g., Full Occlusion (FOC), Out-of-View (OV)) owing to the enlarged visual field. Note that VC is not labeled in TrackingNet.
| Dataset | FOC | OV | FM | VC | AUC gain (OSTrack [33]) | AUC gain (TransT [8]) |
|---|---|---|---|---|---|---|
| LaSOT [11] | 42.1% | 37.1% | 18.9% | 11.8% | +1.1% | +2.2% |
| $\mathrm{LaSOT}_{ext}$ [12] | 62.7% | 21.3% | 58.7% | 39.3% | +3.1% | +2.0% |
| TNL2K [30] | 13.9% | 33.4% | 24.0% | 44.3% | +2.2% | +3.0% |
| TrackingNet [23] | 4.7% | 4.0% | 10.0% | - | +0.1% | +0.4% |

FOC and OV characterize significant drift; FM and VC characterize drastic motion.
# G Detailed Latency of Non-uniform Resizing

We provide the latency analysis in terms of time (ms) and MACs for solving the QP and resizing images in Tab. A5. It can be seen that resizing images with interpolation costs about half of the total time in spite of its parallel processing, while iteratively solving the QP costs the rest of the total time despite its much lower FLOPs.

Table A5: Detailed latency of non-uniform resizing

| Component | Time (ms) | MACs |
|---|---|---|
| Solving QP | 0.89 | 0.16 M |
| Resizing | 0.69 | 1.12 M |
| Total | 1.58 | 1.28 M |
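The resizing component timed above amounts to resampling the source image on the non-uniform grid defined by the optimized intervals. The sketch below illustrates one way such interpolation-based resampling can work, assuming axis-aligned-deformation semantics in the spirit of [24] (uniform source cells mapped to target cells of sizes $d_l^{row}$, $d_k^{col}$); it is an illustrative stand-in, not the authors' exact implementation, and uses nearest-neighbor gathering for brevity.

```python
import numpy as np

def nonuniform_resize(img, d_row, d_col):
    """Resample `img` onto a non-uniform grid defined by row/col intervals.

    Assumed semantics: the source image is split into a uniform m x n grid,
    and source cell (l, k) is stretched to a target cell of height d_row[l]
    and width d_col[k].
    """
    Hs, Ws = img.shape[:2]
    m, n = len(d_row), len(d_col)
    H, W = int(round(d_row.sum())), int(round(d_col.sum()))

    # Piecewise-linear map from target pixel centers back to source coords:
    # target boundaries are cumulative sums of the intervals, source
    # boundaries are uniform.
    ty = np.concatenate([[0.0], np.cumsum(d_row)])  # target row boundaries
    sy = np.linspace(0.0, Hs, m + 1)                # uniform source row boundaries
    tx = np.concatenate([[0.0], np.cumsum(d_col)])
    sx = np.linspace(0.0, Ws, n + 1)

    src_y = np.interp(np.arange(H) + 0.5, ty, sy)   # source coord per target row
    src_x = np.interp(np.arange(W) + 0.5, tx, sx)   # source coord per target col

    # Nearest-neighbor gather (a real implementation would interpolate).
    yi = np.clip(src_y.astype(int), 0, Hs - 1)
    xi = np.clip(src_x.astype(int), 0, Ws - 1)
    return img[yi[:, None], xi[None, :]]

# Toy usage: magnify the second row band of a 64x64 ramp image.
img = np.arange(64 * 64, dtype=float).reshape(64, 64)
d_row = np.array([16.0, 20.0, 12.0, 16.0])  # non-uniform row intervals, sum = 64
d_col = np.full(4, 16.0)
out = nonuniform_resize(img, d_row, d_col)
print(out.shape)  # (64, 64)
```

With uniform intervals the mapping reduces to the identity, which is a convenient sanity check for this kind of grid-based resampler.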
# H Confidence Interval for Ablation Experiments of Hyper-parameters

We calculate the $90\%$ confidence intervals for all the experiments in Tab. 3 using bootstrap sampling. Each $90\%$ confidence interval is obtained using 10000 bootstrap samples, each sample of size 280 (the number of videos in the LaSOT test split). The results are listed in Tab. A6.

Table A6: $90\%$ confidence intervals for ablation experiments of hyper-parameters.
(a) Bandwidth $\beta$

| $\beta$ | 4 | 64 | 256 |
|---|---|---|---|
| Upper | 70.95 (+1.87) | 72.02 (+1.86) | 71.54 (+1.90) |
| Mean | 69.08 | 70.16 | 69.64 |
| Lower | 67.23 (-1.85) | 68.33 (-1.83) | 67.80 (-1.84) |

(b) Control grid size

| Size | 8 | 16 | 32 |
|---|---|---|---|
| Upper | 68.50 (+2.06) | 72.02 (+1.86) | 71.52 (+1.98) |
| Mean | 66.44 | 70.16 | 69.54 |
| Lower | 64.49 (-1.95) | 68.33 (-1.83) | 67.63 (-1.91) |

(c) Zoom factor $\gamma$

| $\gamma$ | 1.25 | 1.5 | 1.75 |
|---|---|---|---|
| Upper | 71.58 (+1.89) | 72.02 (+1.86) | 71.35 (+1.94) |
| Mean | 69.69 | 70.16 | 69.41 |
| Lower | 67.88 (-1.81) | 68.33 (-1.83) | 67.49 (-1.92) |

(d) Balance factor $\lambda$

| $\lambda$ | 0 | 1 | 2 |
|---|---|---|---|
| Upper | 71.69 (+1.89) | 72.02 (+1.86) | 71.43 (+1.87) |
| Mean | 69.80 | 70.16 | 69.56 |
| Lower | 67.93 (-1.87) | 68.33 (-1.83) | 67.71 (-1.85) |
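The bootstrap procedure described above can be sketched as follows. The per-video AUC values here are randomly generated stand-ins (the real evaluation yields one AUC per LaSOT test video); the procedure itself — resampling 280 videos with replacement 10000 times and taking the 5th/95th percentiles of the resampled means — follows the description in the text.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-video AUC scores for the 280 LaSOT test videos
# (stand-ins for the real evaluation results).
per_video_auc = rng.uniform(0.3, 1.0, size=280)

def bootstrap_ci(scores, n_boot=10000, alpha=0.10, seed=0):
    """Percentile bootstrap CI for the mean of per-video scores."""
    rng = np.random.default_rng(seed)
    n = len(scores)
    # Resample n videos with replacement, n_boot times; average each sample.
    idx = rng.integers(0, n, size=(n_boot, n))
    boot_means = scores[idx].mean(axis=1)
    lower = np.quantile(boot_means, alpha / 2)
    upper = np.quantile(boot_means, 1 - alpha / 2)
    return scores.mean(), lower, upper

mean, lo, hi = bootstrap_ci(per_video_auc)
print(f"mean AUC = {100 * mean:.2f}, 90% CI = [{100 * lo:.2f}, {100 * hi:.2f}]")
```

The (+x)/(−x) offsets in Tab. A6 correspond to `upper - mean` and `lower - mean` in this formulation.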
# I More Visualization Results

We visualize the results of our method OSTrack-Zoom and its corresponding baseline on videos with challenging attributes to qualitatively show the superiority of our method in Fig. A2. Thanks to the zoom-in behavior and the enlarged visual field, our method can better handle challenges such as full occlusion and fast motion.

As shown in the left part of Fig. A2, at frame #150, both our method and the baseline lose track of the target since the target is fully occluded. At frame #157, our method quickly re-detects the target as it re-appears. Note that the target is significantly larger in the resized input image when using our resizing method, which indicates that the zoom-in behavior allowed by our method helps the re-detection of the target. At frame #187, our method successfully tracks the target whereas the baseline still drifts to a similar car.

As shown in the right part of Fig. A2, at frame #15, the target is moving very fast. As a result, in the resized input image generated by the baseline, part of the target is out of view, which causes an inaccurate localization. In contrast, the resized input image generated by our method keeps the entire target visible thanks to the enlarged visual field, resulting in a more accurate tracking result. The situation is similar at frame #27, where the baseline tracks a fully-preserved distractor rather than the half-cropped target, whereas our method successfully tracks the target since the target is intact. At frame #32, the baseline completely loses track of the target, while our method successfully tracks it.
+ +![](images/2200d8a4e65f66936a6c6719e456926aa93f6115c0eaf7dce03a7c31ae238217.jpg) + +![](images/e3f0d502788d4422d6e365d51aa78f210c14626e80a707b0475582f5c23dcf74.jpg) + +![](images/5538b9f526b151ffae84a4eb59162dca2bd09bb1427cce4d49b11d34f8e3e013.jpg) + +![](images/e7622b402803be6262565e9074439e142f2d521ea75e3390e33b2c782b530820.jpg) + +![](images/a2f68a11550e2f11838cee6a597257f4d16e780ef15085d3eaa0035ddba9dbbe.jpg) +Ground Truth OSTrack-Zoom OSTrack + +![](images/04865b7ccad0b7f66b6f919a2ed60caea73895411ce01f73fc4a605537921c27.jpg) +Figure A2: Visualization of tracking results on videos with challenging attributes. Input to the tracker is displayed on the right side of each full image, where the image with blue box is the input image resized by our method and the image with green box is the input image resized by the baseline. (left) An example of video with full occlusion. The zoom-in behavior allows our method to quickly re-detect the target and avoid drifting. (right) An example of video with fast motion. The enlarged visual field allows our tracker to have the whole target visible without cropping, which facilitates localization and avoids drifting to similar objects when the target has drastic motion. + +# J Failure Cases + +We visualize the failure cases of our method by comparing the tracking results of OSTrack-Zoom (blue) and OSTrack (green) in Fig. A3. + +As shown in the left part of Fig. A3, at frame #1108, the target is out of view. As our method enlarges the visual field, apart from the distractor (orange ball) at the center of the input, an additional distractor (dark green ball at the top-right of the input image) comes into view, which causes the tracker to predict a large box between the two distractors. The localization error quickly accumulates to an overly enlarged visual field at frame #1117, since the visual field is partly determined by the previous prediction. 
At this time, a white ball very similar to the template is brought into view, causing our method to drift to the far-away white ball. Then, at frame #1130, when the target re-appears, our method cannot re-detect it, as it has drifted to a far-away distractor, which puts the real target outside the visual field of our tracker.

As shown in the right part of Fig. A3, at frame #94, both our method and the baseline successfully track the target. However, at frame #104, the enlarged visual field from our method brings additional context (the orange robotic arm on the left) into view. The small fraction of the orange robotic arm on the left makes the distractor look like a horizontally flipped template, which has a small fraction of the orange robotic arm on the right. As a result, our method drifts to the distractor. Then, at frame #114, our method fails to track back to the squeezed real target since it is at the edge of the input image and partly invisible.

From the visualization of the failure cases, we find that, although an enlarged visual field promotes tracking performance in most cases, additional distractors or context brought by the fixed large visual field can sometimes cause drifting. Thus, it would be interesting to develop a method that dynamically adjusts the visual field based on the surroundings of the target and its tracking history in the future.

![](images/7b567f6faf97d55238a56a737177c2d84916c1ae9aed1ad4fb14de2aa2de5d8e.jpg)

![](images/8a06fb675863b6b2a50953bb8195b124b0f24fa501c1ba2382d12df0a57511a1.jpg)

![](images/0c2d842815ac77fa96f930257d99dfca754a2456680d0192a576fd24260018fe.jpg)

![](images/016cc90256a6c97e0c96d802bd1806e407c45ee20f2d5555ba03c1763140f9db.jpg)

Figure A3: Visualization of failure cases. Input to the tracker is displayed on the right side of each full image, where the image with the blue box is the input image resized by our method and the image with the green box is the input image resized by the baseline.
The template image is visualized on the top-left corner of the full image. (left) A combination of out-of-view and fast motion causes an accumulated prediction error which leads to drifting. Localization with an enlarged visual field under some challenging scenarios may be difficult, and a wrong prediction may produce less desirable results since the enlarged visual field brings in more distractors; the localization error quickly accumulates to an overly enlarged visual field, which then causes drifting. (right) Cluttered background with similar distractors causes drifting. The additional context (orange robotic arm) brought by the enlarged visual field makes the distractor on the left look like a horizontally flipped template, which causes drifting.

![](images/7aa74b1b057c01930741fd25954be01aa3f136a8e533d337b9a7f333ddb65141.jpg)
Ground Truth OSTrack-Zoom OSTrack

![](images/2bff42c190abe6a3c82e0fe557abe9046d1dc6383c4896b135447e8fbd2edb04.jpg)