diff --git a/ICML/2025/Zero-Shot Cyclic Peptide Design via Composable Geometric Constraints/69aa88c6-fb8d-40f7-9781-6d3ee96fc40e_content_list.json b/ICML/2025/Zero-Shot Cyclic Peptide Design via Composable Geometric Constraints/69aa88c6-fb8d-40f7-9781-6d3ee96fc40e_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..ff4c0d2e45b1991c0355a6d60cc2feb1aeeb38bb --- /dev/null +++ b/ICML/2025/Zero-Shot Cyclic Peptide Design via Composable Geometric Constraints/69aa88c6-fb8d-40f7-9781-6d3ee96fc40e_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:77c9b5a5ebdb00289c39363d62eaf6daef8699d6ba8dfb56368c32babb8f37d6 +size 121404 diff --git a/ICML/2025/Zero-Shot Cyclic Peptide Design via Composable Geometric Constraints/69aa88c6-fb8d-40f7-9781-6d3ee96fc40e_model.json b/ICML/2025/Zero-Shot Cyclic Peptide Design via Composable Geometric Constraints/69aa88c6-fb8d-40f7-9781-6d3ee96fc40e_model.json new file mode 100644 index 0000000000000000000000000000000000000000..eeb8d43fab7eb7a3b41e76dd2242f0297737fbb7 --- /dev/null +++ b/ICML/2025/Zero-Shot Cyclic Peptide Design via Composable Geometric Constraints/69aa88c6-fb8d-40f7-9781-6d3ee96fc40e_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:75f9af59aa9df10234a24153fda1a60e5dc1df70de40c3c1f4256b3579286a11 +size 147869 diff --git a/ICML/2025/Zero-Shot Cyclic Peptide Design via Composable Geometric Constraints/69aa88c6-fb8d-40f7-9781-6d3ee96fc40e_origin.pdf b/ICML/2025/Zero-Shot Cyclic Peptide Design via Composable Geometric Constraints/69aa88c6-fb8d-40f7-9781-6d3ee96fc40e_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..5ac83b589edfc834e4cc2bacd432e695378bbed1 --- /dev/null +++ b/ICML/2025/Zero-Shot Cyclic Peptide Design via Composable Geometric Constraints/69aa88c6-fb8d-40f7-9781-6d3ee96fc40e_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9d8d1b618ffca8f3255b3817ec8b643e25e6dae8ce685b790fa3643839c9e19c +size 4751941 diff --git a/ICML/2025/Zero-Shot Cyclic Peptide Design via Composable Geometric Constraints/full.md b/ICML/2025/Zero-Shot Cyclic Peptide Design via Composable Geometric Constraints/full.md new file mode 100644 index 0000000000000000000000000000000000000000..bdbb41a6fa67c25e2006693302a217e563436647 --- /dev/null +++ b/ICML/2025/Zero-Shot Cyclic Peptide Design via Composable Geometric Constraints/full.md @@ -0,0 +1,526 @@ +# Zero-Shot Cyclic Peptide Design via Composable Geometric Constraints + +Dapeng Jiang $^{12*}$ Xiangzhe Kong $^{13*}$ Jiaqi Han $^{4*}$ Mingyu Li $^{1}$ Rui Jiao $^{13}$ Wenbing Huang $^{56}$ Stefano Ermon $^{4}$ Jianzhu Ma $^{17}$ Yang Liu $^{13}$ + +# Abstract + +Cyclic peptides, characterized by geometric constraints absent in linear peptides, offer enhanced biochemical properties, presenting new opportunities to address unmet medical needs. However, designing target-specific cyclic peptides remains underexplored due to limited training data. To bridge the gap, we propose CP-Composer, a novel generative framework that enables zero-shot cyclic peptide generation via composable geometric constraints. Our approach decomposes complex cyclization patterns into unit constraints, which are incorporated into a diffusion model through geometric conditioning on nodes and edges. 
During training, the model learns from unit constraints and their random combinations in linear peptides, while at inference, novel constraint combinations required for cyclization are imposed as input. Experiments show that our model, despite trained with linear peptides, is capable of generating diverse target-binding cyclic peptides, reaching success rates from $38\%$ to $84\%$ on different cyclization strategies. + +# 1. Introduction + +Peptides occupy an intermediate position between small molecules and antibodies, offering unique advantages over conventional drug formats, such as higher specificity and enhanced cell permeability (Fosgerau & Hoffmann, 2015; Lee et al., 2019). Among them, cyclic peptides, which introduce geometric constraints into linear peptides, have earned + +*Equal contribution 1Institute for AI Industry Research (AIR), Tsinghua 2Xingjian College, Tsinghua University 3Department of Computer Science and Technology, Tsinghua University 4Stanford University 5Gaoling School of Artificial Intelligence, Renmin University of China 6Beijing Key Laboratory of Research on Large Models and Intelligent Governance 7Department of Electronic Engineering, Tsinghua University. Correspondence to: Yang Liu , Jianzhu Ma . + +Proceedings of the $42^{nd}$ International Conference on Machine Learning, Vancouver, Canada. PMLR 267, 2025. Copyright 2025 by the author(s). + +![](images/39e11e75f9c0a5f032431127ad719858a66567fa1b24c3aea388dbc5c404440f.jpg) + +![](images/cf65fa513502138169ab59e0d22adac91c78b3e2985f996560cef1186ce70337.jpg) +Figure 1. Four common strategies to form cyclic peptides. (A) Stapled peptide where a lysine (K) at position $i$ and an aspartic acid (D) at position $i + 3$ are connected via dehydration condensation on side chains. The aspartic acid can also be replaced with glutamic acid (E) at position $i + 4$ . (B) Head-to-tail peptide where the first residue and the last residue form an amide bond for connection. (C) Disulfide peptide where two cysteines (C) non-adjacent in sequence are spatially connected through a disulfur bond. (D) Bicycle peptide which uses 1,3,5-trimethylbenezene to form a triangle between three cysteines (C) non-adjacent in sequence. + +![](images/92a394308f0b318d9ce35f9bf9c6a2dba6d7414498fb069c46f1360b08f891fa.jpg) + +![](images/4594a4327f6f12f3d6117ce8df0de14ab3dbd0788446e4e46b2d36f12c00e1f4.jpg) + +increasing attention (Zorzi et al., 2017). These constraints stabilize the peptide conformation, enhancing biochemical properties including binding affinity, in vivo stability, and oral bioavailability (Ji et al., 2024), which are essential for identifying desired drug candidates (Zhang & Chen, 2022). + +Existing literature on target-specific peptide generation primarily focuses on linear peptides, utilizing autoregressive models (Li et al., 2024a), multi-modal flow matching (Li et al., 2024b; Lin et al., 2024), and geometric latent diffusion (Kong et al., 2024). However, these methods are not directly applicable to cyclic peptide design due to the scarcity of available data (Rettie et al., 2024). Other approaches either impose geometric constraints on linear peptides through post-filtering (Wang et al., 2024b), which typically results in low acceptance rates, or rely on hard-coded model design (Rettie et al., 2024), which lacks generalizability across different cyclization patterns. 
In contrast, we hypothesize that the complex geometric constraints of cyclic peptides can be decomposed into fundamental unit constraints, resembling how complex mathematical formulas are built from basic arithmetic operations. While existing datasets rarely contain peptides that satisfy intricate cyclic + +constraints, they typically include abundant instances of single unit constraints and their random combinations, which serve as the building blocks for more complicated designs. Therefore, we reason that a framework could potentially be developed to learn these unit constraints from available linear peptide data, circumventing data limitations and enabling generalization to the diverse combined constraints required for cyclic peptide design. + +In this paper, we present CP-Composer, a framework for zero-shot cyclic peptide generation, relying solely on available data for linear peptides. Our work is equipped with the following contributions. 1) Decomposing cyclization strategies into fundamental geometric constraints. We identify four common chemical cyclization strategies (Figure 1) and formalize cyclic peptide design as a geometrically constrained generation problem. By analyzing cyclization patterns, we derive two fundamental unit constraints, type constraints and distance constraints, allowing description of diverse cyclization strategies to be specific combinations of these units. 2) Encoding constraints with geometric conditioning. We incorporate unit constraints into a the denoising network of a diffusion model (Kong et al., 2024) using additional vectorized embeddings of types and distances on geometric graphs, which enables flexible conditioning on compositions of constraints required for cyclic peptide generation. 3) Enabling zero-shot cyclic peptide design. We jointly train conditional and unconditional models on unit constraints and their random combinations found in linear peptide data. At inference, novel constraint combinations corresponding to desired cyclization strategies, which are unseen during training, are imposed as input conditions. The model is guided by the difference in score estimates between conditional and unconditional models, enabling zero-shot generalization to cyclic peptides. 4) Assessing generated cyclic peptides on comprehensive metrics. Experiments demonstrate that our CP-Composer generates cyclic peptides with complex geometric constraints effectively, achieving high success rates from $38\%$ to $84\%$ , while maintaining realistic distributions on amino acid types and dihedral angles. Molecular dynamics further confirm that the generated cyclic peptides exhibit desired binding affinity while forming more stable binding conformation compared to the native linear peptide binders. + +# 2. Related Work + +Geometric diffusion models. Besides their success on applications like image (Rombach et al., 2021; Song et al., 2020; 2021a) and video (Ho et al., 2022) generation, diffusion models have become a preeminent tool in modeling the distribution of structured data in geometric domains. While early works have explored their applicability on tasks like molecule generation (Xu et al., 2022; 2023; Park & Shen, + +2024), there have been growing interests in scaling these models to systems of larger scales, such as antibody (Luo et al., 2022), peptide (Kong et al., 2024), and protein (Yim et al., 2023; Watson et al., 2023; Anand & Achim, 2022) in general, or to those with complex dynamics, such as molecular dynamics simulation (Han et al., 2024b). 
Despite fruitful achievements, how to impose diverse geometric constraints stills remain under-explored for geometric diffusion models, which we aim to address in this work. + +Diffusion guidance. Diffusion sampling can be flexibly controlled by progressively enforcing guidance through the reverse denoising process. Dhariwal & Nichol (2021) proposes classifier-guidance, which employs an additionally trained classifier to amplify the guidance signal. Classifier-free guidance (CFG) (Ho & Salimans, 2022) is a more widely adopted alternative that replaces the classifier with the difference of the conditional and unconditional score, which has been further generalized to the multi-constraint scenario by composing multiple scores in diffusion sampling (Liu et al., 2022; Huang et al., 2023). Diffusion guidance has also been explored for solving inverse problems on images (Song et al., 2024; Kawar et al., 2022; Song et al., 2021b), molecules (Bao et al., 2022), and PDEs (Jiang et al., 2024). Our approach instead extends CFG to compose geometric constraints with application to cyclic peptide design. + +Peptide design. Target-specific peptide design initially relied on physical methods using statistical force fields and fragment libraries (Hosseinzadeh et al., 2021; Swanson et al., 2022). With the rise of equivariant neural networks (Satorras et al., 2021; Han et al., 2024a), geometric deep generative models have emerged. PepFlow (Li et al., 2024b) and PPFLOW (Lin et al., 2024) use multi-modal flow matching, while PepGLAD (Kong et al., 2024) applies geometric latent diffusion with a full-atom autoencoder. However, these methods struggle with cyclic peptide design due to limited data. Prior works introduce disulfide bonds via post-filtering (Wang et al., 2024b) or enforce head-to-tail cyclization through hard-coded model design (Rettie et al., 2024). In contrast, our approach decomposes cyclization into fundamental unit constraints, enabling zero-shot cyclic peptide generation with broad flexibility across diverse patterns. + +# 3. Method + +In this section, we detail our method, CP-Composer. We first introduce basic concepts of peptide modeling and cyclic strategies in Sec. 3.1 and specify these strategies as constraints in Sec. 3.2. We further present the guided generation framework and the encoding strategy for incorporating the constraints in Sec. 3.3 and Sec. 3.4, respectively. We finally describe the training and inference schemes in Sec. 3.5. The overall workflow is depicted in Fig. 2. + +![](images/6dbd7a1992e20bad51096f53278103edc3ea628d94f350840fd9f58dcc1249b2.jpg) +Figure 2. Overall training and inference design of CP-Composer. We define two unit constraints, type constraint and distance constraint (§ 3.2), which are incorporated into the diffusion model via geometric conditioning (§ 3.4). During training, the model learns from single unit constraints and their combinations observed in linear peptides. At inference, novel combinations corresponding to specific cyclization strategies are imposed with guidance signal amplified by classifier-free guidance, enabling zero-shot cyclic peptide design (§ 3.5). + +# 3.1. Preliminaries + +Representing peptide as geometric graph. We represent the binding site and peptide as a fully-connected geometric graph $\mathcal{G} = (\mathcal{V},\mathcal{E})$ where $\mathcal{V}$ is the set of nodes and $\mathcal{E}$ is the set of edges. 
Each node is a residue, bound with node features $(\pmb{h}_i,\vec{\pmb{X}}_i)$ with $h_i\in \mathbb{R}^m$ being the one-hot encoding of the amino acid type and $\vec{\pmb{X}}_i\in \mathbb{R}^{k_i\times 3}$ being the coordinate of the $k_{i}$ atoms. + +Geometric latent diffusion model for peptide design. Our model is built on PepGLAD (Kong et al., 2024), a latent geometric diffusion model, but is adaptable to other diffusion-based frameworks. It employs a variational autoencoder to project peptide graphs $\mathcal{G}$ into residue-level latents $\mathcal{G}_z = \{(z_i,\vec{z}_i)\}_{i = 1}^N$ with an encoder $\mathcal{E}_{\phi}$ , and a corresponding decoder $\mathcal{D}_{\xi}$ for the inverse, where $z_{i}\in \mathbb{R}^{8}$ is the E(3)-invariant latent and $\vec{z}_i\in \mathbb{R}^3$ is the E(3)-equivariant counterpart. A diffusion model is learned in the compact latent space, with the denoiser $\epsilon_{\theta}(\mathcal{G}_z^{(t)},t)$ parameterized by an equivariant GNN (Kong et al., 2023). The sampling process initiates with latents $\mathcal{G}_z^{(T)} = \{(z_i^{(T)},\vec{z}_i^{(T)})\}_{i = 1}^N$ drawn from the prior and gradually denoises it using DDPM (Ho et al., 2020) sampler for a total of $T$ steps. The final latents $\mathcal{G}_z^{(0)}$ are decoded back to the data space using decoder $\mathcal{D}_{\xi}$ . + +Cyclic peptide and cyclization strategies. Unlike common linear peptides, which are chain-like structures, a cyclic peptide is formed by animo acids connected in a ring structure. As shown in Fig. 1, we primarily focus on four types of cyclic peptides in this paper: stapled, head-to-tail, disulfide and bicycle peptides. Each strategy applies constraints on + +specific amino acid types and/or their pairwise distances. Taking the disulfide peptide as an example (Fig. 1C), to link two cysteines at indices $i, j$ with a disulfur bond of length $d_{S}$ , a disulfide peptide is constrained by + +$$ +\mathbb {C} _ {\text {D i s u l f i d e}, i, j} = \left(\left\{\arg \max \left(h _ {i}\right) = \arg \max \left(h _ {j}\right) = k _ {C} \right\}, \right. +$$ + +$$ +\left\{\left\| \vec {X} _ {i} - \vec {X} _ {j} \right\| _ {2} = d _ {S} \right\}), \tag {1} +$$ + +where $k_{C}$ represents the index of cysteine (C) in the one-hot embeddings. This constraint can be decomposed into two node-level constraints on the amino acid types and one edge-level constraint on the distance. We refer to these as unit geometric constraints with further details on these constraints provided in Sec. 3.2. We demonstrate that all four cyclic strategies can be expressed as combinations of these unit geometric constraints in Appendix B. + +# 3.2. Decomposing Cyclization Strategies as Geometric Constraints + +In this work, we consider two types of unit geometric constraints, namely type constraint and distance constraint. In particular, type constraint operates on node-level by enforcing the node to be of certain type, while distance constraint takes place on edge-level, specifying a pair of nodes to reside at a certain distance. + +Definition 3.1 (Type constraint). A type constraint is a set $\mathbb{C}_T\coloneqq \{(i,l_i)\}_{i\in \mathcal{V}_T}$ where each entry $(i,l_{i})$ represents that node $i$ should be of type $l_{i}$ , while $\mathcal{V}_T\subseteq \mathcal{V}$ is the set of nodes to enforce the type constraint. + +Definition 3.2 (Distance constraint). 
A distance constraint is a set $\mathbb{C}_D\coloneqq \{(i,j,d_{ij})\}_{(i,j)\in \mathcal{E}_D}$ where each element $(i,j,d_{ij})$ requires nodes $i$ and $j$ to be positioned at distance $d_{ij}$, while $\mathcal{E}_D\subseteq \mathcal{E}$ is the set of edges on which the distance constraint is enforced.

Notably, our taxonomy of geometric constraints is complete, in the sense that each of the cyclic strategies $\mathbb{C}$ described in Sec. 3.1 can be decomposed into a combination of type constraints $\mathbb{C}_T$ and/or distance constraints $\mathbb{C}_D$. We defer the detailed explanations to Appendix B.

Problem definition. We formulate the task of cyclic peptide design as finding candidate peptides $\mathcal{G}$ that satisfy the constraint $\mathbb{C}$, where $\mathbb{C}$ is any one of the four cyclic constraints.

# 3.3. Inverse Design with Diffusion Guidance

To perform inverse design, a widely adopted approach is to progressively inject a guidance term into diffusion sampling towards the design target (Bao et al., 2022; Song et al., 2023), which shares a similar spirit with classifier guidance (Dhariwal & Nichol, 2021). Specifically, at each sampling step $t$, the conditional score is derived by Bayes' rule:

$$
\begin{array}{l}
\nabla_{\mathcal{G}_z^{(t)}} \log p_t(\mathcal{G}_z^{(t)} \mid \mathbb{C}) = \nabla_{\mathcal{G}_z^{(t)}} \log p_t(\mathcal{G}_z^{(t)}) \\
\quad + \nabla_{\mathcal{G}_z^{(t)}} \log p_t(\mathbb{C} \mid \mathcal{G}_z^{(t)}), \tag{2}
\end{array}
$$

where the last term $\nabla_{\mathcal{G}_{z}^{(t)}}\log p_t(\mathbb{C}|\mathcal{G}_z^{(t)})$ acts as the guidance, which can typically be a hand-crafted energy function (Kawar et al., 2022; Song et al., 2024) or a pretrained neural network (Dhariwal & Nichol, 2021; Bao et al., 2022).

However, this approach often proves empirically unfavorable, since the guidance term in Eq. 2 is the gradient of a neural network, which degrades sample quality due to adversarial effects (Ho & Salimans, 2022). Distinct from the approach above, we propose an alternative that, inspired by classifier-free guidance, guides the sampling by directly composing the unconditional and conditional scores without additional gradient terms. In detail, we have

$$
\tilde{\epsilon}_{\theta}\left(\mathcal{G}_z^{(t)}, \mathbb{C}, t\right) = (w + 1)\,\epsilon_{\theta}\left(\mathcal{G}_z^{(t)}, \mathbb{C}, t\right) - w\,\epsilon_{\theta}\left(\mathcal{G}_z^{(t)}, t\right) \tag{3}
$$

where $w$ is the guidance weight and the guided score $\tilde{\epsilon}_{\theta}$ replaces $\epsilon_{\theta}$ for score computation. In particular, the rationale of Eq. 2 and Eq.
3 are linked by the following distribution + +$$ +\tilde {p} _ {t} \left(\mathcal {G} _ {z} ^ {(t)} | \mathbb {C}\right) \propto p _ {t} \left(\mathcal {G} _ {z} ^ {(t)}\right) p _ {t} \left(\mathbb {C} \mid \mathcal {G} _ {z} ^ {(t)}\right) ^ {w}, \tag {4} +$$ + +with the corresponding conditional score + +$$ +\begin{array}{l} \nabla_ {\mathcal {G} _ {z} ^ {(t)}} \log \tilde {p} _ {t} (\mathcal {G} _ {z} ^ {(t)} | \mathbb {C}) \\ = \nabla_ {\mathcal {G} _ {z} ^ {(t)}} \log p _ {t} (\mathcal {G} _ {z} ^ {(t)}) + w \nabla_ {\mathcal {G} _ {z} ^ {(t)}} \log p _ {t} (\mathbb {C} | \mathcal {G} _ {z} ^ {(t)}), \\ \approx \epsilon_ {\theta} \left(\mathcal {G} _ {z} ^ {(t)}, t\right) + w \nabla_ {\mathcal {G} _ {z} ^ {(t)}} \log p _ {t} \left(\mathbb {C} \mid \mathcal {G} _ {z} ^ {(t)}\right). \tag {5} \\ \end{array} +$$ + +By further leveraging the relation $\nabla_{\mathcal{G}_z^{(t)}}\log p_t(\mathbb{C}|\mathcal{G}_z^{(t)}) =$ $\nabla_{\mathcal{G}_z^{(t)}}\log p_t(\mathcal{G}_z^{(t)}|\mathbb{C}) - \nabla_{\mathcal{G}_z^{(t)}}\log p_t(\mathcal{G}_z^{(t)})\approx \epsilon_\theta (\mathcal{G}_z^{(t)},\mathbb{C},t) -$ $\epsilon_{\theta}(\mathcal{G}_{z}^{(t)},t)$ into Eq. 5, we obtain the expression in Eq. 3. + +Conceptually, Eq. 2 adopts energy-guidance that directly models $\log p_t(\mathbb{C}|\mathcal{G}_z^{(t)})$ by an externally trained energy function. Eq. 3 instead follows the convention in classifier-free guidance by rewriting $\nabla_{\mathcal{G}_z^{(t)}}\log p_t(\mathbb{C}|\mathcal{G}_z^{(t)}) = \nabla_{\mathcal{G}_z^{(t)}}\log p_t(\mathcal{G}_z^{(t)}|\mathbb{C}) - \nabla_{\mathcal{G}_z^{(t)}}\log p_t(\mathcal{G}_z^{(t)})\approx \epsilon_\theta (\mathcal{G}_z^{(t)},\mathbb{C},t) - \epsilon_\theta (\mathcal{G}_z^{(t)},t)$ , which gives Eq. 3 after simplification. + +In recent studies, how to obtain the conditional score $\epsilon_{\theta}(\mathcal{G}_{z}^{(t)},\mathbb{C},t)$ still remains unclear. Notably, $\mathbb{C}$ is a complicated geometric constraint, which is fundamentally different from a class label (Ho & Salimans, 2022) or a target value (Bao et al., 2022), where an embedding (e.g., one-hot for class label) can be readily adopted as the control signal to feed into the denoiser. In the following section, we will introduce our approach to encode type and distance constraint. + +# 3.4. Encoding Constraints via Geometric Conditioning + +To encode the constraints as control signals, we propose geometric conditioning that embeds the type and distance constraints into the denoiser through vectorization. + +Conditioning type constraints. For type constraint $\mathbb{C}_T = \{(i,l_i)\}_{i\in \mathcal{V}_T}$ where $l_{i}\in \{0,1,\dots ,K - 1\}$ is the desired node type for node $i$ , we operate at node-level by augmenting the E(3)-invariant node feature $\pmb{h}_i$ with an additional vector $\pmb{l}_i\in \mathbb{R}^K$ which serves as the control signal. This corresponds to the encoding function $f_{T}(\mathbb{C}_{T}) = \{(i,l_{i})\}_{i\in \mathcal{V}_{T}}$ that lifts $l_{i}$ to the embedding space where + +$$ +\boldsymbol {l} _ {i} = \left\{ \begin{array}{l l} \operatorname {O n e - h o t} \left(l _ {i}\right) & i \in \mathcal {V} _ {T}, \\ \boldsymbol {0} & i \in \mathcal {V} \backslash \mathcal {V} _ {T}. \end{array} \right. \tag {6} +$$ + +Such design of the control signal is simple yet effective, since different type constraints will induce different signal $l_{i}$ , thus making the constraints distinguishable to the network. 
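To make the construction of Eq. 6 concrete, the following is a minimal PyTorch-style sketch; the function name, tensor shapes, and the example cysteine index are illustrative assumptions, not part of a released implementation.

```python
import torch
import torch.nn.functional as F

def type_control_signal(type_constraint, num_nodes, num_types):
    """Node-level control signal of Eq. 6: one-hot rows for the constrained
    nodes in V_T, all-zero rows for every other node."""
    signal = torch.zeros(num_nodes, num_types)
    for i, l_i in type_constraint.items():  # C_T given as {node index: desired type}
        signal[i] = F.one_hot(torch.tensor(l_i), num_classes=num_types).float()
    return signal

# Example: constrain nodes 2 and 5 to be cysteine (index 4 in an assumed alphabet).
l_signal = type_control_signal({2: 4, 5: 4}, num_nodes=12, num_types=20)
```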
More importantly, for any type constraint, the conditional score $\epsilon_{\theta}(\mathcal{G}_{z}^{(t)},\mathbb{C},t)$ obtained by this means still enjoys E(3)-equivariance, since $l_{i}$ is E(3)-invariant. + +Conditioning distance constraints. For distance constraint $\mathbb{C}_D\coloneqq \{(i,j,d_{ij})\}_{(i,j)\in \mathcal{E}_D}$ where $d_{ij}$ specifies the distance between node $i$ and $j$ , we instead design the encoding function as $f_{D}(\mathbb{C}_{D}) = \{(i,j,\pmb{d}_{ij})\}_{(i,j)\in \mathcal{E}_{D}}$ , where the control signal $\pmb{d}_{ij}$ is defined at edge-level: + +$$ +\boldsymbol {d} _ {i j} = \left\{ \begin{array}{l l} \operatorname {R B F} \left(d _ {i j}\right) & (i, j) \in \mathcal {E} _ {D}, \\ \phi & (i, j) \in \mathcal {E} \backslash \mathcal {E} _ {D}. \end{array} \right. \tag {7} +$$ + +Here $\mathrm{RBF}(\cdot)$ is the radial basis kernel that lifts the distance from a scalar to a high-dimensional vector (Schütt et al., 2018), and $\phi$ denotes that the edges not in the set $\mathcal{E}_D$ will not be featurized. The control signal $d_{ij}$ is then viewed as a special type of edge feature, which will be further processed by an additional dyMEAN layer (Kong et al., 2023), whose input will be the subgraph $(\mathcal{V},\mathcal{E}_D)$ with edge features $\{d_{ij}\}_{(i,j)\in \mathcal{E}_D}$ . More details are deferred to Appendix C.2. Akin to the analysis for type constraints, our way of encoding distance constraints also preserve the E(3)-equivariance of the conditional score, with proof in Appendix A.2. + +Moreover, the encoding is also injective, as formally stated in Theorem 3.3. Such property is crucial for effective guidance since different constraints will be projected as different control signals, always making them distinguishable to the score network. + +Theorem 3.3 (Injective). Both $f_{T}$ and $f_{D}$ are injective. That is, $f(\mathbb{C}^{1}) = f(\mathbb{C}^{2})$ if and only if $\mathbb{C}^{1} = \mathbb{C}^{2}$ , where $(f, \mathbb{C}^{1}, \mathbb{C}^{2})$ can be $(f_{T}, \mathbb{C}_{T}^{1}, \mathbb{C}_{T}^{2})$ or $(f_{D}, \mathbb{C}_{D}^{1}, \mathbb{C}_{D}^{2})$ . Furthermore, their product function $\tilde{f}(\mathbb{C}_T, \mathbb{C}_D) := (f_T(\mathbb{C}_T), f_D(\mathbb{C}_D))$ is also injective. + +Composing type and distance constraints. Our approach of encoding the type and distance constraints in node- and edge-level respectively also facilitates conveniently composing them together. In particular, we can easily devise $\epsilon_{\theta}(\mathcal{G}_{z}^{(t)},\mathbb{C}_{T},\mathbb{C}_{D},t)$ by simultaneous feeding the type and distance control signals in Eq. 6 and 7 into the score network, which corresponds to enforcing a compositional constraint $(\mathbb{C}_T,\mathbb{C}_D)$ . This extension is critical since it enables us to enforce richer combinations of the constraints at inference time, even generalizing to those unseen during training. In this way, we are able to design cyclic peptides with training data that only consist of linear peptides due to the generalization capability of our approach. + +# 3.5. Training and Inference + +With the geometric conditioning technique to derive the conditional score, we are now ready to introduce the training and inference framework. + +Design space for constraints. 
For a linear peptide $\mathcal{G}$ sampled from the training set with features $\{(h_i,\vec{\pmb{X}}_i)\}_{i = 1}^N$, we consider the following design space for the type constraint:

$$
\mathcal{C}_T(\mathcal{G}) = \left\{\mathbb{C}_T \mid \mathbb{C}_T = \left\{\left(i, \arg\max(\boldsymbol{h}_i)\right)\right\}_{i \in \mathcal{V}_T},\ \left|\mathcal{V}_T\right| \leq 4 \right\}, \tag{8}
$$

which includes all type constraints that fix the type of each constrained node $i$ to its type in $\mathcal{G}$, with at most 4 constrained nodes. For the distance constraint, we select the following design space:

$$
\begin{array}{l}
\mathcal{C}_D(\mathcal{G}) = \left\{\mathbb{C}_D \mid \mathbb{C}_D = \left\{\left(i, j, \|\vec{\boldsymbol{X}}_i - \vec{\boldsymbol{X}}_j\|_2\right)\right\}_{(i, j) \in \mathcal{E}_D}, \right. \\
\quad \left. d_{\mathcal{G}}(i, j) \in \{3, 4, 6\},\ |\mathcal{E}_D| \leq 6 \right\}, \tag{9}
\end{array}
$$

which spans all distance constraints that pin the distance between nodes $i$ and $j$ to their Euclidean distance in $\mathcal{G}$, while the shortest path distance between $i$ and $j$, i.e., $d_{\mathcal{G}}(i,j)$, equals 3, 4, or 6. We design $\mathcal{C}_T(\mathcal{G})$ and $\mathcal{C}_D(\mathcal{G})$ such that $\mathcal{C}_T(\mathcal{G}) \times \mathcal{C}_D(\mathcal{G})$ covers the constraint space of cyclic peptides, where $\times$ is the Cartesian product. This permits our approach to generalize to novel compositions within the space $\mathcal{C}_T(\mathcal{G}) \times \mathcal{C}_D(\mathcal{G})$ at inference time without necessarily seeing such a particular combination in the training data, e.g., the four compositional constraints of cyclic peptides.

# Algorithm 1 Training Procedure of CP-Composer

Input: Data distribution $\mathcal{D}$, mask probabilities for type and distance constraints $p_T, p_D$, encoder $\mathcal{E}_{\phi}$, score network $\epsilon_{\theta}$, diffusion scheduler Scheduler($\cdot$)

1: while not converged do
2: Sample $\mathcal{G}\sim \mathcal{D}$, $\mathbb{C}_T\sim \mathrm{Unif}(\mathcal{C}_T(\mathcal{G}))$ and $\mathbb{C}_D\sim \mathrm{Unif}(\mathcal{C}_D(\mathcal{G}))$ {cf. Eq. 8-9}
3: $\mathbb{C}_T\gets \emptyset$ with probability $p_T$
4: $\mathbb{C}_D\gets \emptyset$ with probability $p_D$
5: $(\pmb{\epsilon},\mathcal{G}_z^{(t)},t)\gets \mathrm{Scheduler}(\mathcal{E}_\phi(\mathcal{G}))$
6: Take a gradient step on

$$
\mathcal{L}(\theta) = \left\|\boldsymbol{\epsilon} - \boldsymbol{\epsilon}_{\theta}\left(\mathcal{G}_{z}^{(t)}, \mathbb{C}_T, \mathbb{C}_D, t\right)\right\|_2^2
$$

7: end while

Training. We employ a single network $\epsilon_{\theta}$ to jointly optimize the conditional and unconditional scores during training, following the paradigm in Ho & Salimans (2022). At each training step, we first sample $\mathcal{G}$ from the training data distribution $\mathcal{D}$ and derive the candidate constraints $\mathcal{C}_T(\mathcal{G})$ and $\mathcal{C}_D(\mathcal{G})$. We then sample a type constraint $\mathbb{C}_T$ and a distance constraint $\mathbb{C}_D$ uniformly from the candidates $\mathcal{C}_T(\mathcal{G})$ and $\mathcal{C}_D(\mathcal{G})$, respectively.
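As a concrete illustration of this sampling step, the following minimal sketch draws one candidate from each design space for a single linear peptide. It assumes the residue graph is summarized by per-residue amino-acid indices and one representative coordinate per residue, treats the shortest path distance $d_{\mathcal{G}}(i,j)$ as the sequence separation $|i-j|$, and uses illustrative names throughout rather than the released implementation.

```python
import random
import torch

def sample_training_constraints(aa_types, coords, max_types=4, max_pairs=6):
    """Draw one type constraint from C_T(G) (Eq. 8) and one distance constraint
    from C_D(G) (Eq. 9) for a linear peptide of N residues.

    aa_types: LongTensor (N,), the argmax of each h_i.
    coords:   FloatTensor (N, 3), one representative coordinate per residue.
    """
    n = aa_types.shape[0]
    # Type constraint: up to `max_types` residues keep their native amino acid type.
    picked = random.sample(range(n), k=random.randint(1, min(max_types, n)))
    c_type = {i: int(aa_types[i]) for i in picked}
    # Distance constraint: pairs with sequence separation 3, 4, or 6, pinned to
    # their Euclidean distance in the training structure (at most `max_pairs` pairs).
    candidates = [(i, j) for i in range(n) for j in range(i + 1, n) if j - i in (3, 4, 6)]
    pairs = random.sample(candidates, k=min(max_pairs, len(candidates))) if candidates else []
    c_dist = {(i, j): float(torch.linalg.norm(coords[i] - coords[j])) for i, j in pairs}
    return c_type, c_dist
```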
To jointly optimize the conditional and unconditional score networks, we replace $\mathbb{C}_T$ and $\mathbb{C}_D$ by the empty set $\varnothing$ with probabilities $p_T$ and $p_D$, respectively; the empty set provides no type and/or distance control signal, so the model degenerates to the unconditional score. Finally, we encode $\mathcal{G}$ into the latent space with $\mathcal{E}_{\phi}$, sample the noise $\epsilon$ and diffusion step $t$, and compute the noised latent $\mathcal{G}_z^{(t)}$. The noise prediction loss (Ho et al., 2020) is adopted to train the score network. We present the detailed training procedure in Alg. 1.

Inference. At inference time, we select one of the four cyclic constraints at a time. Each constraint is represented by $(\mathbb{C}_T^*,\mathbb{C}_D^*)$ where $\mathbb{C}_T^*$ and $\mathbb{C}_D^*$ are the target type and distance constraints, respectively. We start from the initial latent $\mathcal{G}_{\boldsymbol{z}}^{(T)}$ sampled from the prior and perform standard

Table 1. Success rates and KL divergence for generated samples from different cyclization strategies.
| Method | Stapled peptide | | | | Head-to-tail peptide | | | |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| | Succ. | AA-KL | B-KL | S-KL | Succ. | AA-KL | B-KL | S-KL |
| PepGLAD (Kong et al., 2024) | 22.80% | 0.1035 | 1.1401 | 0.0126 | 30.23% | 0.1052 | 1.1347 | 0.0125 |
| w/ EG (Bao et al., 2022) | 25.41% | 0.0744 | 1.1821 | 0.0127 | 61.63% | 0.0798 | 1.0891 | 0.0128 |
| CP-Composer w = 0.0 | 25.71% | 0.0932 | 1.1179 | 0.0126 | 37.21% | 0.1021 | 1.0787 | 0.0118 |
| CP-Composer w = 1.0 | 30.00% | 0.1017 | 1.1235 | 0.0161 | 55.81% | 0.1008 | 1.0604 | 0.0124 |
| CP-Composer w = 2.0 | 21.42% | 0.1067 | 1.0996 | 0.0147 | 65.11% | 0.1055 | 1.1005 | 0.0126 |
| + CADS (Sadat et al., 2024) | 27.14% | 0.0807 | 1.0975 | 0.0119 | 45.54% | 0.0798 | 1.0589 | 0.0132 |
| CP-Composer w = 5.0 | 38.57% | 0.1812 | 1.1515 | 0.0180 | 74.42% | 0.1320 | 1.0523 | 0.0122 |
| CP-Composer w = 10.0 | 32.86% | 0.3532 | 1.1726 | 0.0232 | 68.60% | 0.1784 | 1.0301 | 0.0175 |

| Method | Disulfide peptide | | | | Bicycle peptide | | | |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| | Succ. | AA-KL | B-KL | S-KL | Succ. | AA-KL | B-KL | S-KL |
| PepGLAD (Kong et al., 2024) | 0 | 0.0808 | 1.1324 | 0.0124 | 0 | 0.0838 | 1.1823 | 0.0238 |
| w/ EG (Bao et al., 2022) | 0 | 0.0711 | 1.0891 | 0.0103 | 0 | 0.0729 | 1.0968 | 0.0228 |
| CP-Composer w = 0.0 | 7.50% | 0.1016 | 1.1062 | 0.0151 | 0 | 0.1225 | 1.1980 | 0.0252 |
| CP-Composer w = 1.0 | 21.25% | 0.1477 | 1.0939 | 0.0151 | 11.53% | 0.1638 | 1.1490 | 0.0395 |
| CP-Composer w = 2.0 | 41.25% | 0.2873 | 1.0994 | 0.0379 | 30.76% | 0.2147 | 1.1195 | 0.0735 |
| + CADS (Sadat et al., 2024) | 3.75% | 0.0939 | 1.0788 | 0.0162 | 3.85% | 0.0901 | 1.0624 | 0.0684 |
| CP-Composer w = 5.0 | 82.50% | 0.5139 | 1.0397 | 0.1913 | 84.62% | 0.3385 | 1.0759 | 0.3351 |
| CP-Composer w = 10.0 | 62.50% | 1.6965 | 4.0312 | 1.1046 | 38.46% | 1.2677 | 8.1935 | 0.3374 |
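The core of the inference procedure (Algorithm 2 below) is the guided score of Eq. 10, which only requires two forward passes of the same denoiser. The minimal sketch here assumes the denoiser accepts the (possibly empty) type and distance constraints as extra inputs; the function and argument names are illustrative.

```python
def guided_score(eps_theta, latents, c_type, c_dist, t, w):
    """Guided score of Eq. 10: extrapolate from the unconditional prediction
    towards the constraint-conditional prediction with guidance weight w."""
    cond = eps_theta(latents, c_type, c_dist, t)  # conditional on (C_T*, C_D*)
    uncond = eps_theta(latents, None, None, t)    # unconditional (empty constraints)
    return (w + 1) * cond - w * uncond
```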
+ +# Algorithm 2 Inference Procedure of CP-Composer + +Input: Target type and distance constraint $(\mathbb{C}_T^*,\mathbb{C}_D^*)$ , diffusion sampler $\mathrm{Sampler}(\cdot)$ , guidance weight $w$ , step $T$ , score network $\epsilon_{\theta}$ , decoder $\mathcal{D}_{\xi}$ + +1: Initialize latents $\mathcal{G}_z^{(T)}$ from prior +2: for $t = T, T - 1, \dots, 1$ do +3: Compute score $\tilde{\epsilon} \gets (w + 1)\epsilon_{\theta}(\mathcal{G}_{\boldsymbol{z}}^{(t)}, \mathbb{C}_T^*, \mathbb{C}_D^*, t) - w\epsilon_{\theta}(\mathcal{G}_{\boldsymbol{z}}^{(t)}, \varnothing, \varnothing, t)$ {Eq. 10} +4: $\mathcal{G}_z^{(t - 1)}\gets \mathrm{Samp}\text{ler} (\mathcal{G}_z^{(t)},\tilde{\epsilon},t)$ {Denoising step} +5: end for + +Return: $\mathcal{D}_{\xi}(\mathcal{G}_z^{(0)})$ + +diffusion sampling with the guided score: + +$$ +\begin{array}{l} \tilde {\epsilon} \left(\mathcal {G} _ {\boldsymbol {z}} ^ {(t)}, \mathbb {C} _ {T} ^ {*}, \mathbb {C} _ {D} ^ {*}, t\right) = (w + 1) \epsilon_ {\theta} \left(\mathcal {G} _ {\boldsymbol {z}} ^ {(t)}, \mathbb {C} _ {T} ^ {*}, \mathbb {C} _ {D} ^ {*}, t\right) \\ - w \epsilon_ {\theta} \left(\mathcal {G} _ {z} ^ {(t)}, \varnothing , \varnothing , t\right), \tag {10} \\ \end{array} +$$ + +where a modified classifier-free guidance is employed to further amplify the guidance signal. The sample is acquired by decoding $\mathcal{G}_z^{(0)}$ back to the data space using the decoder $\mathcal{D}_{\xi}$ . The inference procedure is depicted in Alg. 2. + +# 4. Experiments + +Task. We evaluate CP-Composer on target-specific cyclic peptide design, aiming to co-design the sequence and the binding structure of cyclic peptides given the binding site on the target protein. + +Dataset. We utilize PepBench and ProtFrag datasets (Kong et al., 2024) for training and validation, with the LNR + +dataset (Kong et al., 2024; Tsaban et al., 2022) for testing. PepBench contains 4,157 protein-peptide complexes for training and 114 complexes for validation, with a target protein longer than 30 residues and a peptide binder between 4 to 25 residues. ProtFrag encompasses 70,498 synthetic samples resembling protein-peptide complexes, which are extracted from local contexts in protein monomers. LNR consists of 93 protein-peptide complexes curated by domain experts, with peptide lengths ranging from 4 to 25 residues. + +We evaluate zero-shot cyclic peptide generation in Sec. 4.1, demonstrate the flexibility of composable geometric constraints with high-order multi-cycle constraints in Sec. 4.2, and assess the stability and binding affinity of the generated cyclic peptides through molecular dynamics in Sec. 4.3. + +# 4.1. Zero-Shot Cyclic Peptide Generation + +Metrics. We evaluate the generated peptides based on two key aspects: cyclic constraint satisfaction and generation quality. For each target protein in the test set, we generate five candidate peptides and compute the following metrics. Success Rate (Succ.) measures the proportion of target proteins for which at least one of the five generated peptides satisfies the geometric constraints of the specified cyclization strategy. Amino Acid Divergence (AA-KL) calculates the Kullback-Leibler (KL) divergence between the amino acid composition of reference peptides and all of the generated samples. 
For cyclization patterns that impose amino acid constraints at specific positions, we exclude these constrained amino acid types when computing the distributions, as successful designs inherently deviate from the reference distribution on these amino acid types. Backbone Dihedral Angle Divergence (B-KL) and Side-Chain Dihedral Angle Divergence (S-KL) indicate the KL divergence between the distribution of the dihedral angles in reference peptides and the generated samples, assessing the rationality of the generated backbones and side chains, respectively.

![](images/3e3402f2dad87cbb911db024e48dc7335cc366c2fc478914a40b70496211a6a1.jpg)
Stapled Peptide

![](images/4e74f09170632ef1293727b81ac56e1de0cd4f56b51df41a2f463eb9b94513c2.jpg)
Head-To-Tail Peptide

![](images/ff8905b8d5f60eaa3c5197ec9cb8ab89797dff7e6133a375b578643b1abcab4a.jpg)
Disulfide Peptide

![](images/eaba9b5b66b50266d6f1d42dc8e4b471b75f67685cd8fdac9948d3738bb8cf92.jpg)
Bicycle Peptide
Figure 3. Four types of generated cyclic peptides, with the red boxes highlighting the positions for cyclization.

Baselines. First, we compare our CP-Composer with the backbone model PepGLAD (Kong et al., 2024) without additional guidance to validate the effectiveness of our framework with composable geometric constraints. We further implement a baseline with the prevailing Energy-based Guidance (EG) (Dhariwal & Nichol, 2021; Bao et al., 2022) applied to node embeddings and pairwise distances to assess the advantages of our approach, with implementation details in Appendix C. To compare CP-Composer with other cyclic peptide generation methods, we implement DiffPepBuilder (Wang et al., 2024a), a model specifically designed for disulfide peptides. Furthermore, we combine our method with the Condition Annealed Diffusion Sampler (CADS) (Sadat et al., 2024), an advanced sampler, to analyze how our method performs when paired with other samplers.

Results. As shown in Table 1, CP-Composer significantly improves constraint satisfaction rates across all cyclization strategies compared to unguided baselines, while maintaining fidelity to reference distributions in amino acid composition and structural dihedral angles. The energy-guided baseline proves effective in simple cases requiring control over a single pairwise distance (i.e., head-to-tail cyclization), but struggles with more complex scenarios involving combinations of distance constraints and type constraints. This limitation is evident from its lower success rates on stapled peptides and complete failure in handling more intricate cyclization patterns including disulfide and bicycle peptides. In contrast, CP-Composer consistently achieves high success rates across these challenging cases, demonstrating the strength of our framework design with composable geometric constraints. In Table 3, we further compare CP-Composer with DiffPepBuilder (Wang et al., 2024a). Although DiffPepBuilder is specifically designed for disulfide peptide generation, CP-Composer achieves higher success rates. These results confirm the effectiveness of CP-Composer. We visualize examples of generated peptides for each cyclization strategy in Fig. 3, with more cases in Appendix E. Furthermore, the weight parameter $w$ effectively balances success rates and generation quality, with increasing control strength yielding higher constraint satisfaction yet slightly higher KL divergence, indicating a trade-off between constraint satisfaction and distributional fidelity.
This flexibility allows users to customize the method based on specific application needs, prioritizing either higher success rates or closer resemblance to natural peptide distributions. + +# 4.2. Flexibility in High-Order Combinations + +![](images/e8306be415b50f2cd0291f8d26d876293e1261f7a571c00a78bdd9c8141ff203.jpg) +Stapled + Stapled + +![](images/04dccf550e87c41882c1f62c22a7dd86fb9cb60dd5e59bfd306cfd72645b6342.jpg) +-S-S-+Head-to-Tail + +![](images/e2770244442cd54b8c06e04350428bff09594605c6346eb7f7b99f0e1baaf938.jpg) +-S-S- + -S-S- +Figure 4. Generated peptides conforming to high-order combinations of cyclizations, with the red boxes highlighting the positions for cyclization. + +![](images/78e9a8b9daba1152986c40b6105b19743800458836b816d0b5a8230d3f4ca55d.jpg) +-S-S-+ -S-S-+ -S-S- + +Setup. To demonstrate the flexibility of our framework in handling composable geometric constraints, we investigate more complex and customized scenarios that involve multiple cyclizations within a single peptide. Specifically, we explore the following high-order combinations: 2*Stapled has two stapled pairs in one peptide. -S-S- + H-T includes one disulfide bond and one head-to-tail in one peptide; 2*-S-S- contains two disulfide bonds in one peptide; 3*-S-S- involves three disulfide bonds in one peptide; The flexibility of CP-Composer enables seamless implementation of these complex constraints: simply combining the individual unit constraints for each cyclization strategy allows the model to accommodate them simultaneously. + +Results. As shown in Table 2, despite the increasing complexity of the constraints, CP-Composer achieves reasonable + +![](images/03c50c343036a0d7cc3bba243f4e9d3b7a8df720ee1cfd0d8af83abe691a8365.jpg) + +![](images/0489f9de1e4faa61d07c6aa03f5b87de8339fae5b2386366c9c9dfae5d3b4a76.jpg) + +![](images/2308ce81ecaaf3616bb79547bb02b5c816d1596a8e1a63005fe4c03b3e914f7b.jpg) +Figure 5. RMSD trajectories from 100 ns molecular dynamics simulations for two target proteins, each bound to either a native linear peptide binder or a cyclic peptide generated by our model. The target proteins and their corresponding linear peptide binders are derived from PDB 3RC4 (top) and PDB 4J86 (bottom), respectively. + +![](images/b49a002b7d2f797f1d61d9fdeeb8bef35ffb9f497e3158e0e04c93174cc2d69a.jpg) + +success rates across all high-order cyclization scenarios. The control strength parameter $w$ remains effective, with higher values leading to enhanced success rates. The only exception is 2*Stapled, likely due to the inherent difficulty of the Staple strategy, which already exhibits the lowest success rate in Table 1. This indicates that our framework effectively learns to generate peptides that conform to the joint distribution of multiple constraints. Fig. 4 visualizes peptides with these high-order cyclization patterns, highlighting the flexibility of CP-Composer in designing structurally feasible peptides tailored for customized requirements. + +Table 2. Success rates for high-order combinations of multiple cyclizations within the same peptide. + +
| $w$ | 2*Stapled | -S-S- + H-T | 2*-S-S- | 3*-S-S- |
| --- | --- | --- | --- | --- |
| w = 1.0 | 2.5% | 0 | 0 | 0 |
| w = 2.0 | 7.5% | 10.0% | 26.0% | 17.2% |
| w = 2.5 | 7.5% | 20.0% | 34.0% | 34.5% |
| w = 3.0 | 7.5% | 26.0% | 62.0% | 65.5% |
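As a concrete illustration of how these high-order combinations are specified at inference time, composing cycles amounts to taking the union of the unit constraints of each individual cycle. The sketch below builds the input constraints for a 2*-S-S- design; the residue indices, the cysteine type index, and the target bond distance are placeholders, not values taken from our experiments.

```python
# Placeholders for illustration only: cysteine index in an assumed 20-letter
# alphabet and a nominal disulfur-bond distance (in Angstrom).
CYS, D_SS = 4, 2.05

def disulfide_units(i, j):
    """Unit constraints of one disulfide cycle between residues i and j:
    two type constraints plus one pairwise distance constraint."""
    return {i: CYS, j: CYS}, {(i, j): D_SS}

# 2*-S-S-: the composed constraint is simply the union of two disulfide cycles.
t1, d1 = disulfide_units(1, 6)
t2, d2 = disulfide_units(8, 13)
c_type = {**t1, **t2}  # node-level constraints fed as Eq. 6 control signals
c_dist = {**d1, **d2}  # edge-level constraints fed as Eq. 7 control signals
```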
+ +Table 3. Success rates comparison between DiffPepBuilder and our method + +
| Succ. | Disulfide Peptide | 2*-S-S- |
| --- | --- | --- |
| CP-Composer | 41.25% | 62.00% |
| DiffPepBuilder (Wang et al., 2024a) | 23.07% | 32.78% |
+ +In Table 3, we compare CP-Composer with DiffPepBuilder. The results show that our method outperforms the cyclic peptide generation model under high-order cyclization scenario: two disulfide bonds in one peptide. This indicates the flexibility of our framework. + +# 4.3. Evaluations by Molecular Dynamics + +Setup. We perform molecular dynamics (MD) simulations using the Amber22 package (Salomon-Ferrer et al., 2013) to compare the stability and binding affinity of linear peptides from the test set with cyclic peptides generated by our model. We use the ff14SB force field for proteins and peptides (Maier et al., 2015) with all systems solvated in water, and $150\,nM\,Na^{+}/Cl^{-}$ counterions are added to neutralize charges and simulate the normal saline environment (Jorgensen et al., 1983; Li et al., 2024c). The SHAKE algorithm is applied to constrain covalent bonds involving hydrogen atoms (Ryckaert et al., 1977), while non-bonded interactions are truncated at $10.0\,\text{\AA}$ , with long-range electrostatics treated using the PME method. To estimate peptide binding energies, we further employ MM/PBSA calculations (Genheden & Ryde, 2015). Notably, while MD simulations provide high accuracy in evaluating conformational stability and binding affinity, they are very computationally expensive. Therefore, we randomly select two target proteins from the test set and generate one cyclic peptide using head-totail and disulfide bond cyclization strategies for evaluation. More details on the setup of MD are in Appendix C.3. + +Results. As shown in Fig. 5, the root mean square deviation (RMSD) trajectories of the two linear peptides from the test set exhibit significant fluctuations, indicating vibrate binding conformations. In contrast, the RMSD trajectories of the cyclic peptides generated by our model are quite flat, producing consistently lower RMSD compared to the linear peptides, suggesting that the introduced geometric constraints effectively enhance conformational stability. Table 4 presents the average RMSD values with standard deviations, along with the binding affinity $(\Delta G)$ estimated via + +Table 4. RMSD trajectories from molecular dynamics after 50 ns (average values and standard deviations), along with binding affinities $(\Delta G)$ estimated by running simulations with MM/PBSA. + +
| Peptide | RMSD (Å) | ΔG-MM/PBSA (kcal/mol) |
| --- | --- | --- |
| PDB: 3RC4 | | |
| Linear (test set) | 2.57±0.51 | -9.73 |
| Cyclic (ours) | 1.44±0.23 | -10.66 |
| PDB: 4J86 | | |
| Linear (test set) | 3.37±0.73 | -15.17 |
| Cyclic (ours) | 1.56±0.40 | -20.41 |
+ +MM/PBSA simulations. The results indicate that cyclic peptides achieve significantly stronger binding affinities than their linear counterparts, thanks to their enhanced stability in the binding conformations. + +# 4.4. Generalization beyond Available Data + +In Fig. 6, we visualize the structural embeddings of peptides generated under different cyclization strategies, along with linear peptides from the test set, using ESM2-650M(Lin et al., 2023) and T-SNE (Van der Maaten & Hinton, 2008). The results reveal distinct clusters corresponding to different cyclization strategies, all of which are clearly separated from the linear peptides. This indicates that CP-Composer generalizes well beyond the available data, effectively exploring unseen regions of cyclic peptides. + +![](images/78cba57b5d137fb9e0c6847eac54aa9b6e3720467b0b208bcdcd2fd8fab9a83d.jpg) +Figure 6. T-SNE visualization of ESM embeddings for peptides in the test set and those generated with different cyclization strategies. + +# 5. Conclusion + +We introduce CP-Composer, a generative framework that enables zero-shot cyclic peptide design via composable geometric constraints. By decomposing complex cyclization patterns into unit constraints, it circumvents the limitation of data, achieves high success rates while preserving fidelity to natural distributions of type and structural statistics, and allows for high-order combinations of cyclization patterns, enabling the design of multi-cycle peptides with customiz + +able strategies. Our framework offers a principled approach to cyclic peptide design, with potential extensions to broader biomolecular applications involving geometric constraints. + +# Acknowledgements + +This work is jointly supported by the National Key R&D Program of China (No.2022ZD0160502), the National Natural Science Foundation of China (No. 61925601, No. 62376276, No. 62276152), Beijing Nova Program (20230484278), China's Village Science and Technology City Key Technology funding, Beijing Natural Science Foundation (No. QY24249) and Wuxi Research Institute of Applied Technologies. + +# Impact Statement + +This paper presents work whose goal is to advance the field of Machine Learning. There are many potential societal consequences of our work, none which we feel must be specifically highlighted here. + +# References + +Anand, N. and Achim, T. Protein structure and sequence generation with equivariant denoising diffusion probabilistic models. arXiv preprint arXiv:2205.15019, 2022. 2 +Bao, F., Zhao, M., Hao, Z., Li, P., Li, C., and Zhu, J. Equivariant energy-guided sde for inverse molecular design. arXiv preprint arXiv:2209.15408, 2022. 2, 4, 6, 7 +Bertsekas, D. P. Constrained optimization and Lagrange multiplier methods. Academic press, 2014. 14 +Dhariwal, P. and Nichol, A. Diffusion models beat gans on image synthesis. Advances in neural information processing systems, 34:8780-8794, 2021. 2, 4, 7 +Fosgerau, K. and Hoffmann, T. Peptide therapeutics: current status and future directions. *Drug discovery today*, 20(1): 122–128, 2015. 1 +Genheden, S. and Ryde, U. The mm/pbsa and mm/gbsa methods to estimate ligand-binding affinities. Expert opinion on drug discovery, 10(5):449-461, 2015. 8, 15 +Goldenthal, R., Harmon, D., Fattal, R., Bercovier, M., and Grinspun, E. Efficient simulation of inextensible cloth. In ACM SIGGRAPH 2007 papers, pp. 49-es. 2007. 14 +Han, J., Cen, J., Wu, L., Li, Z., Kong, X., Jiao, R., Yu, Z., Xu, T., Wu, F., Wang, Z., et al. 
A survey of geometric graph neural networks: Data structures, models and applications. arXiv preprint arXiv:2403.00485, 2024a. 2 + +Han, J., Xu, M., Lou, A., Ye, H., and Ermon, S. Geometric trajectory diffusion models. In The Thirty-eighth Annual Conference on Neural Information Processing Systems, 2024b. URL https://openreview.net/forum?id=OYmms5Mv9H.2 +Ho, J. and Salimans, T. Classifier-free diffusion guidance. arXiv preprint arXiv:2207.12598, 2022. 2, 4, 5 +Ho, J., Jain, A., and Abbeel, P. Denoising diffusion probabilistic models. Advances in neural information processing systems, 33:6840-6851, 2020. 3, 5 +Ho, J., Salimans, T., Gritsenko, A., Chan, W., Norouzi, M., and Fleet, D. J. Video diffusion models. Advances in Neural Information Processing Systems, 35:8633-8646, 2022. 2 +Hosseinzadeh, P., Watson, P. R., Craven, T. W., Li, X., Rettie, S., Pardo-Avila, F., Bera, A. K., Mulligan, V. K., Lu, P., Ford, A. S., et al. Anchor extension: a structure-guided approach to design cyclic peptides targeting enzyme active sites. Nature Communications, 12(1):3384, 2021. 2 +Huang, L., Chen, D., Liu, Y., Shen, Y., Zhao, D., and Zhou, J. Composer: Creative and controllable image synthesis with composable conditions. arXiv preprint arXiv:2302.09778, 2023. 2 +Ji, X., Nielsen, A. L., and Heinis, C. Cyclic peptides for drug development. Angewandte Chemie International Edition, 63(3):e202308251, 2024. 1 +Jiang, E., Peng, J., Ma, Z., and Yan, X.-B. Ode-dps: Ode-based diffusion posterior sampling for inverse problems in partial differential equation. arXiv preprint arXiv:2404.13496, 2024. 2 +Jorgensen, W. L., Chandrasekhar, J., Madura, J. D., Impey, R. W., and Klein, M. L. Comparison of simple potential functions for simulating liquid water. The Journal of chemical physics, 79(2):926-935, 1983. 8, 15 +Kawar, B., Elad, M., Ermon, S., and Song, J. Denoising diffusion restoration models. Advances in Neural Information Processing Systems, 35:23593-23606, 2022. 2, 4 +Kong, X., Huang, W., and Liu, Y. End-to-end full-atom antibody design. arXiv preprint arXiv:2302.00203, 2023. 3, 5, 13, 15 +Kong, X., Jia, Y., Huang, W., and Liu, Y. Full-atom peptide design with geometric latent diffusion, 2024. URL https://arxiv.org/abs/2402.13555.1, 2, 3, 6, 7, 14, 15 + +Lee, A. C.-L., Harris, J. L., Khanna, K. K., and Hong, J.-H. A comprehensive review on current advances in peptide drug development and design. International journal of molecular sciences, 20(10):2383, 2019. 1 +Li, J., Chen, T., Luo, S., Cheng, C., Guan, J., Guo, R., Wang, S., Liu, G., Peng, J., and Ma, J. Hotspot-driven peptide design via multi-fragment autoregressive extension. arXiv preprint arXiv:2411.18463, 2024a. 1 +Li, J., Cheng, C., Wu, Z., Guo, R., Luo, S., Ren, Z., Peng, J., and Ma, J. Full-atom peptide design based on multimodal flow matching. In *Forty-first International Conference on Machine Learning*, 2024b. 1, 2 +Li, M., Lan, X., Shi, X., Zhu, C., Lu, X., Pu, J., Lu, S., and Zhang, J. Delineating the stepwise millisecond allosteric activation mechanism of the class c gpcr dimer mgl5. Nature Communications, 15(1):7519, 2024c. 8, 15 +Lin, H., Zhang, O., Zhao, H., Jiang, D., Wu, L., Liu, Z., Huang, Y., and Li, S. Z. Ppflow: Target-aware peptide design with torsional flow matching. In *Forty-first International Conference on Machine Learning*, 2024. 1, 2 +Lin, Z., Akin, H., Rao, R., Hie, B., Zhu, Z., Lu, W., Smetanin, N., Verkuil, R., Kabeli, O., Shmueli, Y., et al. Evolutionary-scale prediction of atomic-level protein structure with a language model. 
Science, 379(6637): 1123-1130, 2023. 9 +Liu, N., Li, S., Du, Y., Torralba, A., and Tenenbaum, J. B. Compositional visual generation with composable diffusion models. In European Conference on Computer Vision, pp. 423-439. Springer, 2022. 2 +Luo, S., Su, Y., Peng, X., Wang, S., Peng, J., and Ma, J. Antigen-specific antibody design and optimization with diffusion-based generative models for protein structures. In Oh, A. H., Agarwal, A., Belgrave, D., and Cho, K. (eds.), Advances in Neural Information Processing Systems, 2022. URL https://openreview.net/forum?id=jSorGn2Tjg.2 +Maier, J. A., Martinez, C., Kasavajhala, K., Wickstrom, L., Hauser, K. E., and Simmerling, C. ff14sb: improving the accuracy of protein side chain and backbone parameters from ff99sb. Journal of chemical theory and computation, 11(8):3696-3713, 2015. 8, 15 +Park, J. and Shen, Y. Equivariant blurring diffusion for hierarchical molecular conformer generation. arXiv preprint arXiv:2410.20255, 2024. 2 +Rettie, S., Juergens, D., Adebomi, V., Bueso, Y. F., Zhao, Q., Leveille, A., Liu, A., Bera, A., Wilms, J., Uffing, A., et al. Accurate de novo design of high-affinity protein + +binding macrocycles using deep learning. bioRxiv, pp. 2024-11, 2024. 1, 2 +Rombach, R., Blattmann, A., Lorenz, D., Esser, P., and Ommer, B. High-resolution image synthesis with latent diffusion models. 2022 IEEE. In CVF Conference on Computer Vision and Pattern Recognition (CVPR), volume 1, 2021. 2 +Ryckaert, J.-P., Ciccotti, G., and Berendsen, H. J. Numerical integration of the cartesian equations of motion of a system with constraints: molecular dynamics of n-alkanes. Journal of computational physics, 23(3):327-341, 1977. 8, 15 +Sadat, S., Buhmann, J., Bradley, D., Hilliges, O., and Weber, R. M. Cads: Unleashing the diversity of diffusion models through condition-annealed sampling, 2024. URL https://arxiv.org/abs/2310.17347.6, 7 +Salomon-Ferrer, R., Gotz, A. W., Poole, D., Le Grand, S., and Walker, R. C. Routine microsecond molecular dynamics simulations with amber on gpus. 2. explicit solvent particle mesh ewald. Journal of chemical theory and computation, 9(9):3878-3888, 2013. 8, 15 +Satorras, V. G., Hoogeboom, E., and Welling, M. E (n) equivariant graph neural networks. In International conference on machine learning, pp. 9323-9332. PMLR, 2021. 2 +Schütt, K. T., Sauceda, H. E., Kindermans, P.-J., Tkatchenko, A., and Müller, K.-R. Schnet-a deep learning architecture for molecules and materials. The Journal of Chemical Physics, 148(24), 2018. 5, 13 +Song, B., Kwon, S. M., Zhang, Z., Hu, X., Qu, Q., and Shen, L. Solving inverse problems with latent diffusion models via hard data consistency. In The Twelfth International Conference on Learning Representations, 2024. URL https://openreview.net/forum?id=j8hdRqOUhN.2,4 +Song, J., Meng, C., and Ermon, S. Denoising diffusion implicit models. In International Conference on Learning Representations, 2021a. URL https://openreview.net/forum?id=St1giarCHLP.2 +Song, J., Zhang, Q., Yin, H., Mardani, M., Liu, M.-Y., Kautz, J., Chen, Y., and Vahdat, A. Loss-guided diffusion models for plug-and-play controllable generation. In International Conference on Machine Learning, pp. 32483-32498. PMLR, 2023. 4 +Song, Y., Sohl-Dickstein, J., Kingma, D. P., Kumar, A., Ermon, S., and Poole, B. Score-based generative modeling through stochastic differential equations. arXiv preprint arXiv:2011.13456, 2020. 2 + +Song, Y., Shen, L., Xing, L., and Ermon, S. 
Solving inverse problems in medical imaging with score-based generative models. arXiv preprint arXiv:2111.08005, 2021b. 2 +Swanson, S., Sivaraman, V., Grigoryan, G., and Keating, A. E. Tertiary motifs as building blocks for the design of protein-binding peptides. *Protein Science*, 31(6):e4322, 2022. 2 +Tsaban, T., Varga, J. K., Avraham, O., Ben-Aharon, Z., Khramushin, A., and Schueler-Furman, O. Harnessing protein folding neural networks for peptide-protein docking. Nature communications, 13(1):176, 2022. 6 +Van der Maaten, L. and Hinton, G. Visualizing data using t-sne. Journal of machine learning research, 9(11), 2008. 9 +Wang, F., Wang, Y., Feng, L., Zhang, C., and Lai, L. Target-specific de novo peptide binder design with diffpepbuilder, 2024a. URL https://arxiv.org/abs/2405.00128.7,8 +Wang, F., Wang, Y., Feng, L., Zhang, C., and Lai, L. Target-specific de novo peptide binder design with diffpepbuilder. Journal of Chemical Information and Modeling, 2024b. 1, 2 +Watson, J. L., Juergens, D., Bennett, N. R., Trippe, B. L., Yim, J., Eisenach, H. E., Ahern, W., Borst, A. J., Ragotte, R. J., Milles, L. F., et al. De novo design of protein structure and function with rfdiffusion. Nature, 620(7976): 1089-1100, 2023. 2 +Xu, M., Yu, L., Song, Y., Shi, C., Ermon, S., and Tang, J. Geodiff: A geometric diffusion model for molecular conformation generation. In International Conference on Learning Representations, 2022. URL https://openreview.net/forum?id=PzcvxEMzvQC.2 +Xu, M., Powers, A., Dror, R., Ermon, S., and Leskovec, J. Geometric latent diffusion models for 3d molecule generation. In International Conference on Machine Learning. PMLR, 2023. 2 +Yang, S., He, X., and Zhu, B. Learning physical constraints with neural projections. Advances in Neural Information Processing Systems, 33:5178-5189, 2020. 14 +Ye, H., Lin, H., Han, J., Xu, M., Liu, S., Liang, Y., Ma, J., Zou, J., and Ermon, S. TFG: Unified training-free guidance for diffusion models. In The Thirty-eighth Annual Conference on Neural Information Processing Systems, 2024. URL https://openreview.net/forum?id=N8YbGX98vc.14 +Yim, J., Trippe, B. L., De Bortoli, V., Mathieu, E., Doucet, A., Barzilay, R., and Jaakkola, T. Se (3) diffusion model + +with application to protein backbone generation. arXiv preprint arXiv:2302.02277, 2023. 2 +Zhang, H. and Chen, S. Cyclic peptide drugs approved in the last two decades (2001-2021). RSC Chemical Biology, 3 (1):18-31, 2022. 1 +Zorzi, A., Deyle, K., and Heinis, C. Cyclic peptide therapeutics: past, present and future. Current opinion in chemical biology, 38:24-29, 2017. 1 + +# A. Proofs + +# A.1. Proof of Theorem 3.3 + +For clarity, we restate Theorem 3.3 below. + +Proposition 3.3 (Injective). Both $f_{T}$ and $f_{D}$ are injective. That is, $f(\mathbb{C}^{1}) = f(\mathbb{C}^{2})$ if and only if $\mathbb{C}^{1} = \mathbb{C}^{2}$ , where $(f, \mathbb{C}^{1}, \mathbb{C}^{2})$ can be $(f_{T}, \mathbb{C}_{T}^{1}, \mathbb{C}_{T}^{2})$ or $(f_{D}, \mathbb{C}_{D}^{1}, \mathbb{C}_{D}^{2})$ . Furthermore, their product function $f(\mathbb{C}_{T}, \mathbb{C}_{D}) \coloneqq (f_{T}(\mathbb{C}_{T}), f_{D}(\mathbb{C}_{D}))$ is also injective. + +To prove Theorem 3.3, we first prove the following lemma. + +Lemma A.2. If $g: \mathbb{R}^J \mapsto \mathbb{R}^K$ is injective, then $f(\mathbb{X}) = \{(i, g(\pmb{k}_i))\}_{i \in \mathcal{V}_{\mathbb{X}}}$ is also injective, where $\mathbb{X} = \{(i, \pmb{k}_i)\}_{i \in \mathcal{V}_{\mathbb{X}}}$ . + +Proof. 
$f(\mathbb{X}^1) = f(\mathbb{X}^2) \iff \{(i, g(\pmb{k}_i^1))\}_{i \in \mathcal{V}_{\mathbb{X}^1}} = \{(i, g(\pmb{k}_i^2))\}_{i \in \mathcal{V}_{\mathbb{X}^2}} \iff \mathcal{V}_{\mathbb{X}^1} = \mathcal{V}_{\mathbb{X}^2} \coloneqq \mathcal{V}_{\mathbb{X}},\ g(\pmb{k}_i^1) = g(\pmb{k}_i^2), \forall i \in \mathcal{V}_{\mathbb{X}} \iff \mathcal{V}_{\mathbb{X}^1} = \mathcal{V}_{\mathbb{X}^2} \coloneqq \mathcal{V}_{\mathbb{X}},\ \pmb{k}_i^1 = \pmb{k}_i^2, \forall i \in \mathcal{V}_{\mathbb{X}} \iff \{(i, \pmb{k}_i^1)\}_{i \in \mathcal{V}_{\mathbb{X}^1}} = \{(i, \pmb{k}_i^2)\}_{i \in \mathcal{V}_{\mathbb{X}^2}} \iff \mathbb{X}^1 = \mathbb{X}^2$, where the third deduction step leverages the injectivity of the function $g$.

Now we are ready to prove Theorem 3.3.

Proof. We first prove the injectivity of $f_{T}$. We choose $g$ to be the one-hot encoding function $\mathrm{One\text{-}hot}(\cdot):\mathbb{R}\mapsto \mathbb{R}^{K}$, which is clearly injective. By leveraging Lemma A.2, the proof is completed.

For the injectivity of $f_{D}$, we similarly instantiate $g$ as the RBF feature map $\phi(\cdot): \mathbb{R} \mapsto \mathbb{R}^{\infty}$. Such a map is injective, since $\|\phi(d_1) - \phi(d_2)\|^2 = \langle \phi(d_1), \phi(d_1) \rangle + \langle \phi(d_2), \phi(d_2) \rangle - 2\langle \phi(d_1), \phi(d_2) \rangle = 1 + 1 - 2\exp(-\gamma \|d_1 - d_2\|^2)$, which implies $\phi(d_1) = \phi(d_2) \iff d_1 = d_2$, hence injectivity. By leveraging Lemma A.2, the proof is completed.

Since both $f_{T}$ and $f_{D}$ are injective, $(f_{T}(\mathbb{C}_{T}^{1}), f_{D}(\mathbb{C}_{D}^{1})) = (f_{T}(\mathbb{C}_{T}^{2}), f_{D}(\mathbb{C}_{D}^{2})) \iff f_{T}(\mathbb{C}_{T}^{1}) = f_{T}(\mathbb{C}_{T}^{2}),\ f_{D}(\mathbb{C}_{D}^{1}) = f_{D}(\mathbb{C}_{D}^{2}) \iff \mathbb{C}_{T}^{1} = \mathbb{C}_{T}^{2},\ \mathbb{C}_{D}^{1} = \mathbb{C}_{D}^{2} \iff (\mathbb{C}_{T}^{1}, \mathbb{C}_{D}^{1}) = (\mathbb{C}_{T}^{2}, \mathbb{C}_{D}^{2})$. Therefore the product function $\tilde{f}(\mathbb{C}_T, \mathbb{C}_D) \coloneqq (f_T(\mathbb{C}_T), f_D(\mathbb{C}_D))$ is also injective, which concludes the proof.

# A.2. Equivariance

Proposition A.3 (Equivariance). The conditional score $\epsilon_{\theta}(\mathcal{G}_{\mathbf{z}}^{(t)},\mathbb{C},t)$ is $E(3)$-equivariant, where $\mathbb{C}$ is $\mathbb{C}_T$ or $\mathbb{C}_D$.

The proof is straightforward: since our encodings of $\mathbb{C}_T$ and $\mathbb{C}_D$ are both E(3)-invariant, the E(3)-equivariance of the score is preserved, following the proof in Kong et al. (2023).

# B. Decompositions of Cyclic Strategies

As illustrated in Fig. 1, cyclic peptides can be formed via four strategies, each of which can be decomposed into the unit geometric constraints defined in Sec. 3.2, as follows. Specifically, the pair $(i,l_i)$ denotes a type constraint requiring node $i$ to be of type $l_i$, and the triplet $(i,j,d_{ij})$ denotes a distance constraint requiring the pairwise distance between nodes $i$ and $j$ to be $d_{ij}$.

Stapled peptide.
Given a lysine (K) located at index $i$, a stapled peptide can be formed via a covalent linkage between the lysine and either an aspartic acid (D) at $i + 3$, with constraints

$$
\mathbb{C}_{\text{Stapled-D}, i} = \left(\{(i, \mathrm{K}), (i + 3, \mathrm{D})\}, \{(i, i + 3, d_{KD})\}\right), \tag{11}
$$

or a glutamic acid (E) at $i + 4$, with constraints

$$
\mathbb{C}_{\text{Stapled-E}, i} = \left(\{(i, \mathrm{K}), (i + 4, \mathrm{E})\}, \{(i, i + 4, d_{KE})\}\right), \tag{12}
$$

where $d_{KD}, d_{KE}$ are the lengths of the covalent linkages between the K-D and K-E pairs, respectively.

Head-to-tail peptide. Given a peptide composed of $N$ amino acids indexed by $0,1,\dots ,N - 1$, an additional amide bond is formed between the head and tail amino acids, giving

$$
\mathbb{C}_{\text{Head-to-tail}} = \left(\emptyset, \{(0, N - 1, d_{P})\}\right), \tag{13}
$$

where $d_P$ is the length of the amide bond.

Disulfide peptide. Connecting two non-adjacent cysteines (C) at positions $i, j$ with a disulfide bond, a disulfide peptide is constrained by

$$
\mathbb{C}_{\text{Disulfide}, i, j} = \left(\{(i, \mathrm{C}), (j, \mathrm{C})\}, \{(i, j, d_{S})\}\right), \tag{14}
$$

where $d_S$ is the length of the disulfide bond.

Bicycle peptide. To link three cysteines (C) at positions $i,j,k$, a bicycle peptide is constrained by

$$
\mathbb{C}_{\text{Bicycle}, i, j, k} = \left(\{(i, \mathrm{C}), (j, \mathrm{C}), (k, \mathrm{C})\}, \{(i, j, d_{T}), (i, k, d_{T}), (j, k, d_{T})\}\right), \tag{15}
$$

where $d_{T}$ is the side length of the equilateral triangle formed by the central 1,3,5-trimethylbenzene.

# C. Implementation Details

# C.1. Energy-based classifier guidance

With the definition of the geometric constraints, we now introduce their corresponding energy function, a scalar function that evaluates the satisfaction of a constraint given the input geometric graph.

Definition C.1 (Energy function of a constraint). An energy function of constraint $\mathbb{C}$ is a differentiable function $g_{\mathbb{C}}(\cdot): \mathcal{X}\mapsto \mathbb{R}_{\geq 0}$, such that $g_{\mathbb{C}}(\mathcal{G}) = 0$ if $\mathcal{G}\in \mathcal{X}$ satisfies the constraint $\mathbb{C}$ and $g_{\mathbb{C}}(\mathcal{G})\neq 0$ otherwise.

Intuitively, the energy function serves as an indicator of constraint satisfaction, following the conventional way of handling equality constraints (Bertsekas, 2014).

One naive way to tackle the inverse problem is to directly optimize the energy function of the constraint (Yang et al., 2020; Goldenthal et al., 2007) with respect to the initial latents $\mathcal{G}_z^{(T)}$, since its minima correspond to the data points $\mathcal{G}$ that satisfy the constraint. However, the large number of sampling steps $T$ required by diffusion models makes this optimization computationally prohibitive, as the gradient needs to be backpropagated through the denoiser $T$ times. Moreover, the energy function is not guaranteed to be convex, which further complicates the optimization.

Energy-based classifier guidance has instead been introduced to inject constraints as guidance of diffusion sampling in a soft and iterative manner.
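Before detailing the guidance mechanism, we illustrate the constraint tuples above and the energy function of Definition C.1 with a minimal sketch of our own (not the released code): the residue labels, bond-length constants, and function names are placeholder assumptions, and the actual guidance energy follows Kong et al. (2024).

```python
from typing import List, Tuple

import torch

TypeConstraint = Tuple[int, str]         # (i, l_i): node i must have residue type l_i
DistConstraint = Tuple[int, int, float]  # (i, j, d_ij): nodes i, j at distance d_ij

# Placeholder linkage lengths in Angstrom (illustrative values only).
D_KD = D_KE = 1.33   # side-chain dehydration-condensation linkage
D_P = 1.33           # head-to-tail amide bond
D_S = 2.05           # disulfide bond
D_T = 2.5            # side of the triangle centered on 1,3,5-trimethylbenzene


def stapled(i: int, use_glutamic: bool = False):
    """Eq. (11) for K-D staples, Eq. (12) for K-E staples."""
    if use_glutamic:
        return [(i, "K"), (i + 4, "E")], [(i, i + 4, D_KE)]
    return [(i, "K"), (i + 3, "D")], [(i, i + 3, D_KD)]


def head_to_tail(n_residues: int):
    """Eq. (13): no type constraints, one distance constraint."""
    return [], [(0, n_residues - 1, D_P)]


def disulfide(i: int, j: int):
    """Eq. (14): two cysteines linked by a disulfide bond."""
    return [(i, "C"), (j, "C")], [(i, j, D_S)]


def bicycle(i: int, j: int, k: int):
    """Eq. (15): three cysteines forming an equilateral triangle."""
    return ([(i, "C"), (j, "C"), (k, "C")],
            [(i, j, D_T), (i, k, D_T), (j, k, D_T)])


def distance_energy(coords: torch.Tensor,
                    dist_constraints: List[DistConstraint]) -> torch.Tensor:
    """Quadratic energy that is zero iff every distance constraint holds
    (one valid instance of Definition C.1, restricted to distance constraints)."""
    energy = coords.new_zeros(())
    for i, j, d in dist_constraints:
        energy = energy + (torch.norm(coords[i] - coords[j]) - d) ** 2
    return energy
```

In this sketch, the type constraints would feed the node-level conditioning, while an energy of this form, composed with the decoder, can supply the gradient term used for guidance below.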
In our setting, we can pair up $p_t(\mathbb{C}|\mathcal{G}_z)$ and the energy function through Boltzmann distribution, i.e., $p_t(\mathbb{C}|\mathcal{G}_z) = \exp(-g_{\mathbb{C}}(\mathcal{D}_{\xi}(\mathcal{G}_z))) / Z$ , where $Z$ is the normalizing constant. In this way, we have, + +$$ +\nabla_ {\mathcal {G} _ {z}} \log p _ {t} (\mathcal {G} _ {z} | \mathbb {C}) = \nabla_ {\mathcal {G} _ {z}} \log p _ {t} (\mathcal {G} _ {z}) - w \nabla_ {\mathcal {G} _ {z}} g _ {\mathbb {C}} \left(\mathcal {D} _ {\xi} (\mathcal {G} _ {z})\right), \tag {16} +$$ + +where $w \in \mathbb{R}$ is added to control the guidance strength. Performing such sampling procedure is equivalent to sampling from the posterior (Ye et al., 2024): + +$$ +p \left(\mathcal {G} _ {z} \mid \mathbb {C}\right) := p \left(\mathcal {G} _ {z}\right) \exp \left(- w g _ {\mathbb {C}} \left(\mathcal {D} _ {\xi} \left(\mathcal {G} _ {z}\right)\right)\right) / Z, \tag {17} +$$ + +which concentrates the density more on the regions with lower energy function value, biasing the sampling towards data points better satisfying the constraint $\mathbb{C} = (\mathbb{C}_T,\mathbb{C}_D)$ . + +In our implementation, we adopt the guidance function in Kong et al. (2024) as the energy function $g_{\mathbb{C}}$ . In particular, the choice of $w$ significantly influences the generation quality. A larger $w$ typically enhances control strength but degrades generation quality when becoming excessively large. To strike a balance between controllability and quality, we conduct a sweep across various $w$ values and ultimately employ $w \in \{10, 30, 50\}$ for energy-based classifier guidance. The best performance across different $w$ values is reported for all conditions. + +# C.2. Distance Constraints as Edge-Level Control + +To inject the edge-level control into the model, we apply the adapter mechanism by adding an additional dyMEAN block (Kong et al., 2023) to each layer, and changing the message passing process into + +$$ +\left\{\left(\boldsymbol {h} _ {i} ^ {(l + 0. 5)}, \vec {\boldsymbol {X}} _ {i} ^ {(l + 0. 5)}\right) \right\} _ {i \in \mathcal {V}} = \operatorname {A M E} \left(\left\{\left(\boldsymbol {h} _ {i} ^ {(l)}, \vec {\boldsymbol {X}} _ {i} ^ {(l)}\right) \right\} _ {i \in \mathcal {V}}, \left\{\boldsymbol {d} _ {i j} \right\} _ {(i, j) \in \mathcal {E} _ {D}}, \mathcal {E} _ {D}\right), \tag {18} +$$ + +$$ +\left\{\left(\boldsymbol {h} _ {i} ^ {(l + 1)}, \vec {\boldsymbol {X}} _ {i} ^ {(l + 1)}\right) \right\} _ {i \in \mathcal {V}} = \operatorname {A M E} \left(\left\{\left(\boldsymbol {h} _ {i} ^ {(l + 0. 5)}, \vec {\boldsymbol {X}} _ {i} ^ {(l + 0. 5)}\right) \right\} _ {i \in \mathcal {V}}, \emptyset , \mathcal {E}\right), \tag {19} +$$ + +where $\mathcal{E}_D\subseteq \mathcal{E}$ is the set of constrained edges, and AME is the Adaptive Multi-Channel Equivariant layer proposed in Kong et al. (2023). Readers are referred to the original paper for further details. + +# C.3. Molecular Dynamics + +We perform molecular dynamics (MD) simulations to assess the stability and binding affinity of linear peptides from the test set and cyclic peptides generated by our model. Simulations are conducted using the Amber22 package with the CUDA implementation of particle-mesh Ewald (PME) MD and executed on GeForce RTX 4090 GPUs (Salomon-Ferrer et al., 2013). For system preparation, the ff14SB force field is applied to proteins and peptides (Maier et al., 2015). 
All systems are solvated in a truncated octahedron transferable intermolecular potential three-point (TIP3P) water box with a $10\AA$ buffer, and $150\,\mathrm{mM}$ $\mathrm{Na}^{+}/\mathrm{Cl}^{-}$ counterions are added to neutralize charges and simulate the normal saline environment (Jorgensen et al., 1983; Li et al., 2024c). Prior to equilibration, two rounds of energy minimization are performed: the first relaxes solvent molecules and $\mathrm{Na}^{+}/\mathrm{Cl}^{-}$ counterions while keeping all other atoms fixed, and the second relaxes all atoms without constraints. The systems are then gradually heated from $0\mathrm{K}$ to $310\mathrm{K}$ over $500~\mathrm{ps}$ under harmonic restraints of $10\mathrm{kcal}\cdot \mathrm{mol}^{-1}\cdot \mathring{\mathrm{A}}^{-2}$ on proteins and peptides. Subsequently, equilibration is carried out at $300\mathrm{K}$ and 1 bar under NPT conditions, with harmonic restraints on protein and ligand atoms progressively reduced from 5.0 to 3.0, 1.0, 0.5, and finally $0.1\mathrm{kcal}\cdot \mathrm{mol}^{-1}\cdot \mathring{\mathrm{A}}^{-2}$ over a total of 2.5 ns. Production simulations are performed at constant temperature $(300\mathrm{K})$ and pressure (1 bar), maintained using the Langevin thermostat and Berendsen barostat, respectively. The SHAKE algorithm is applied to constrain covalent bonds involving hydrogen atoms (Ryckaert et al., 1977), while non-bonded interactions are truncated at $10.0\AA$, with long-range electrostatics treated using the PME method. To estimate peptide binding energies, we further employ MM/PBSA calculations (Genheden & Ryde, 2015). While MD simulations provide high accuracy in evaluating conformational stability and binding affinity, they are computationally expensive. Therefore, we randomly select two target proteins from the test set and generate one cyclic peptide using the head-to-tail and disulfide-bond cyclization strategies for evaluation.

# C.4. Hyperparameter details

We train CP-Composer on a single RTX 3090 GPU (24 GB memory) with the AdamW optimizer. For the autoencoder, we train for up to 100 epochs and save the top 10 models based on validation performance. We ensure that the total number of edges (which scales with the square of the number of nodes) does not exceed 60,000. The initial learning rate is set to $10^{-4}$ and is reduced by a factor of 0.8 if the validation loss does not improve for 5 consecutive epochs. For the diffusion model, we train for no more than 1000 epochs; the learning rate is $10^{-4}$ and decays by a factor of 0.6, and we stop training early if the validation loss does not decrease for 10 epochs. During training, we set the guidance strength to 1 for sampling at the validation stage. The architectural details of the autoencoder and the diffusion model are the same as in Kong et al. (2024). For the RBF kernel, we use 32 feature channels.

# D. Further Analysis

# D.1. Necessity of RBFs

We evaluate the influence of the RBFs on the quality of generated peptides under the most difficult setting: bicycle peptides (26 samples in the test set). As shown in Table 5, the validation and parameter-sensitivity study confirms that the RBF design is necessary to support distance control. Furthermore, performance saturates beyond 16 channels, indicating that a finite number of RBFs is sufficient for empirical performance.

# D.2. Generation efficiency

In Table 6, we show the runtime comparison between our method and DiffPepBuilder, both running on a 24GB RTX 3090 GPU.

Table 5. Success rates with different numbers of RBFs
| Succ. (w=2) | Bicycle peptide |
| --- | --- |
| RBFs=0 | 26.92% |
| RBFs=16 | 30.76% |
| RBFs=32 | 30.76% |
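For concreteness, the RBF channels referenced above can be realized with a standard Gaussian radial-basis expansion of the constrained distances. The sketch below is our own illustration (the cutoff, spacing, and function name are assumptions, not the paper's exact settings); the channel count corresponds to the "RBFs" rows of Table 5.

```python
import torch


def rbf_expand(d: torch.Tensor, num_rbf: int = 32,
               d_min: float = 0.0, d_max: float = 10.0) -> torch.Tensor:
    """Expand scalar distances (Angstrom) into `num_rbf` Gaussian features.

    Centers are evenly spaced on [d_min, d_max]; gamma is chosen from the
    spacing so that neighboring basis functions overlap.
    Input shape (...,) -> output shape (..., num_rbf).
    """
    centers = torch.linspace(d_min, d_max, num_rbf, device=d.device)
    gamma = 1.0 / (centers[1] - centers[0]) ** 2
    return torch.exp(-gamma * (d.unsqueeze(-1) - centers) ** 2)


# Example: featurize a target disulfide-bond length for edge-level control.
features = rbf_expand(torch.tensor([2.05]), num_rbf=32)
print(features.shape)  # torch.Size([1, 32])
```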
+ +Table 6. Runtime of our method and DiffPepBuilder + +![](images/d262880e608f648477dd5782b56141e6b6d003d3f75306b980fe81fed0ae6a7d.jpg) +Figure 7. Four types of generated cyclic peptides, with the red boxes highlighting the position for cyclization. + +# E. Additional Visualizations + +In Fig. 7, we show more cases of the stapled, Head-to-tail, disulfur and bicycle peptide. + +# F. Code Availability + +The codes for our CP-Composer is provided at the link https://github.com/jdp22/CP-Composer_final. \ No newline at end of file diff --git a/ICML/2025/Zero-Shot Cyclic Peptide Design via Composable Geometric Constraints/images.zip b/ICML/2025/Zero-Shot Cyclic Peptide Design via Composable Geometric Constraints/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..621409ed38258fdf99bf49c065dfb3cc36c1717d --- /dev/null +++ b/ICML/2025/Zero-Shot Cyclic Peptide Design via Composable Geometric Constraints/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:61afffc20132467b53e43e134f92c768e5f957a55fb881fdc99e06f98196cdfd +size 893865 diff --git a/ICML/2025/Zero-Shot Cyclic Peptide Design via Composable Geometric Constraints/layout.json b/ICML/2025/Zero-Shot Cyclic Peptide Design via Composable Geometric Constraints/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..f085335b4e660201e58ab95ac92e72acccb5e0ab --- /dev/null +++ b/ICML/2025/Zero-Shot Cyclic Peptide Design via Composable Geometric Constraints/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:8ead846565a3f928b6cf8400896f13695497fb8e7f8740ccb3e2ba116579542a +size 698215 diff --git a/ICML/2025/Zero-Shot Generalization of GNNs over Distinct Attribute Domains/da9fbf2a-a1c3-4f09-b587-cf7ab97be4b0_content_list.json b/ICML/2025/Zero-Shot Generalization of GNNs over Distinct Attribute Domains/da9fbf2a-a1c3-4f09-b587-cf7ab97be4b0_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..5da3c7c7ac76f218dd21160a0cb3518934f1f250 --- /dev/null +++ b/ICML/2025/Zero-Shot Generalization of GNNs over Distinct Attribute Domains/da9fbf2a-a1c3-4f09-b587-cf7ab97be4b0_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:26225f0b4f7b1b1d1ed5b634ea4e06b68a1e36bec030222602643b726b3d857d +size 219893 diff --git a/ICML/2025/Zero-Shot Generalization of GNNs over Distinct Attribute Domains/da9fbf2a-a1c3-4f09-b587-cf7ab97be4b0_model.json b/ICML/2025/Zero-Shot Generalization of GNNs over Distinct Attribute Domains/da9fbf2a-a1c3-4f09-b587-cf7ab97be4b0_model.json new file mode 100644 index 0000000000000000000000000000000000000000..bba481c9186de71a43ebea63f22711a15cb177a6 --- /dev/null +++ b/ICML/2025/Zero-Shot Generalization of GNNs over Distinct Attribute Domains/da9fbf2a-a1c3-4f09-b587-cf7ab97be4b0_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4d78db9f1722784ab2bc2b7d220778d8b502fb13aa22d2fadfedc4f7d2e14f34 +size 257480 diff --git a/ICML/2025/Zero-Shot Generalization of GNNs over Distinct Attribute Domains/da9fbf2a-a1c3-4f09-b587-cf7ab97be4b0_origin.pdf b/ICML/2025/Zero-Shot Generalization of GNNs over Distinct Attribute Domains/da9fbf2a-a1c3-4f09-b587-cf7ab97be4b0_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..9c05ef7d3d454875beb20e3af367a8084cc466cf --- /dev/null +++ b/ICML/2025/Zero-Shot Generalization of GNNs over Distinct Attribute Domains/da9fbf2a-a1c3-4f09-b587-cf7ab97be4b0_origin.pdf @@ -0,0 +1,3 @@ +version 
https://git-lfs.github.com/spec/v1 +oid sha256:ec5dfe53ad9be595dd90676610e7ff612ad9f1babe2fb67d399b232007891657 +size 1545841 diff --git a/ICML/2025/Zero-Shot Generalization of GNNs over Distinct Attribute Domains/full.md b/ICML/2025/Zero-Shot Generalization of GNNs over Distinct Attribute Domains/full.md new file mode 100644 index 0000000000000000000000000000000000000000..a6f37e93a57d1b64a08bc36f544b407fe30440ce --- /dev/null +++ b/ICML/2025/Zero-Shot Generalization of GNNs over Distinct Attribute Domains/full.md @@ -0,0 +1,927 @@ +# Zero-Shot Generalization of GNNs over Distinct Attribute Domains + +Yangyi Shen1 Jincheng Zhou2 Beatrice Bevilacqua2 Joshua Robinson1 Charilaos Kanatsoulis1 Jure Leskovec1 Bruno Ribeiro2 + +# Abstract + +Traditional Graph Neural Networks (GNNs) cannot generalize to new graphs with node attributes different from the training ones, making zero-shot generalization across different node attribute domains an open challenge in graph machine learning. In this paper, we propose STAGE, which encodes statistical dependencies between attributes rather than individual attribute values, which may differ in test graphs. By assuming these dependencies remain invariant under changes in node attributes, STAGE achieves provable generalization guarantees for a family of domain shifts. Empirically, STAGE demonstrates strong zero-shot performance on medium-sized datasets: when trained on multiple graph datasets with different attribute spaces (varying in types and number) and evaluated on graphs with entirely new attributes, STAGE achieves a relative improvement in Hits@1 between $40\%$ to $103\%$ in link prediction and a $10\%$ improvement in node classification compared to state-of-the-art baselines. + +# 1. Introduction + +Zero-shot generalization refers to the ability of the model to handle unseen test data without additional training or adaptation (Larochelle et al., 2008; Xian et al., 2017; Wang et al., 2022). An essential prerequisite for zero-shot generalization is a unified input space where models can learn and transfer prediction patterns across domains. While this challenge has been addressed in areas like natural language through tokenization techniques that represent any text through a fixed vocabulary (Samuel & Øvrelid, 2023), graphs present unique challenges in achieving such unified input space. + +Attributes in graphs can vary significantly across domains. + +$^{1}$ Department of Computer Science, Stanford University, Stanford, USA $^{2}$ Department of Computer Science, Purdue University, West Lafayette, USA. Correspondence to: Yangyi Shen . + +Proceedings of the $42^{nd}$ International Conference on Machine Learning, Vancouver, Canada. PMLR 267, 2025. Copyright 2025 by the author(s). + +Node attributes in test graphs can differ from those in training graphs in four key ways: (1) their types (e.g., continuous vs. categorical variables); (2) their names (e.g., RAM specifications in ecommerce graphs and clothing size in retail graphs, as illustrated in Figure 1); (3) their semantics, where attributes with the same name can have different meanings across domains – for instance, the meaning of size differs substantially between electronics and clothing domains; (4) their cardinality, as graphs may contain varying numbers of node attributes. These challenges make it difficult to define a unified input space that enables zero-shot generalization to unseen attributed graphs. 
For these reasons, training graph models that can zero-shot generalize to new graphs with unseen attribute domains remains an open challenge. Recent approaches address this problem using various strategies. One approach is to ignore node attributes and focus solely on graph topology, but this strategy may leave valuable node attribute information unutilized. Another line of work seeks to unify input spaces by converting graphs and attributes into text representations, which are then processed by pretrained text encoders (Chen et al., 2024a; Huang et al., 2023; Liu et al., 2024; Zhang et al., 2023). While promising, these approaches may struggle with numerical attributes (Collins et al., 2024; Gruver et al., 2024; Schwartz et al., 2024). Recently, Zhao et al. (2024b) proposed an analytical approach for making predictions on new graphs with potentially new attributes. However, this approach sidesteps the fundamental challenge of creating a unified input space.

In this paper, we introduce STAGE (Statistical Transfer for Attributed Graph Embeddings), which transforms node attributes from their "absolute" natural space into a relative space that captures statistical dependencies between attributes. For instance, as illustrated in Figure 1, these dependencies manifest themselves as correlations driving purchases across domains, which remain invariant even when the purchased items and their attributes change. In practice, STAGE represents such statistical dependencies through a two-step process that transforms node attributes into fixed-dimensional edge embeddings, achieving a unified input space alongside provable invariance to changes in attribute values (including their types, names and semantics), as well as to permutations of attribute order and permutations of node identities. Specifically, STAGE first constructs a weighted STAGE-edge-graph for each edge in the input graph, where the nodes represent attributes of the edge endpoints and the edge weights capture dependencies between the attributes. Then, STAGE uses an additional shallow GNN to generate embeddings for each STAGE-edge-graph. Finally, STAGE applies the original GNN to a modified input graph, which contains only the newly generated edge embeddings but not the node attributes.

![](images/b2cf0d1a29c3332614c75052f47f8af93e155de7cf7aa7495137c02a5371fbd9.jpg)
(a) Train

![](images/67bf363d9552a574b19443ff6602677a70df1182ea1259006b3949e3c886301d.jpg)
(b) Test
Figure 1: The task of zero-shot generalization to attributed graphs with unseen attributes. Attributes in test are different from those in train in types and semantics, but attributes associated with an edge are highly correlated in both train and test (e.g., income level is positively correlated with phone price in (a) and height is positively correlated with size in (b)). Our STAGE learns these statistical dependencies among attributes to perform zero-shot transfer across distinct attribute domains.

The complexity of STAGE is linear in the size of the input graph and quadratic in the number of attributes, as it captures pairwise statistical dependencies between attributes over the edges of the graph. This makes STAGE particularly well-suited for small to medium-sized datasets, where it strikes a balance between computational feasibility and strong generalization performance.

We prove that STAGE can learn domain-independent representations for certain types of domain shifts, enabling zero-shot generalization.
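The two-step construction described above can be summarized schematically as follows. This is a simplified sketch of our own, not the released implementation: the class and argument names are placeholders, and the concrete construction of each STAGE-edge-graph from empirical conditional probabilities is specified in Section 2.

```python
import torch
import torch.nn as nn


class STAGEPipeline(nn.Module):
    """Schematic STAGE wrapper: an intra-edge GNN followed by an inter-edge GNN."""

    def __init__(self, intra_edge_gnn: nn.Module, inter_edge_gnn: nn.Module):
        super().__init__()
        self.intra_edge_gnn = intra_edge_gnn  # embeds each STAGE-edge-graph
        self.inter_edge_gnn = inter_edge_gnn  # runs on the attribute-free input graph

    def forward(self, edge_index: torch.Tensor, stage_edge_graphs: list) -> torch.Tensor:
        # Step 1 (intra-edge): one fixed-dimensional embedding per input edge.
        edge_emb = torch.stack([self.intra_edge_gnn(g) for g in stage_edge_graphs])
        # Step 2 (inter-edge): original topology plus the learned edge embeddings;
        # the original node attributes are discarded.
        return self.inter_edge_gnn(edge_index, edge_attr=edge_emb)
```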
Experimentally, for link prediction in e-commerce networks spanning six distinct product domains, STAGE achieves up to $103\%$ improvement in Hits@1 compared to the strongest baseline. In node classification tasks on social networks, STAGE achieves approximately $10\%$ better performance than the strongest baseline.

# 2. STAGE

Let $G = (V, E, \mathbf{X})$ be an attributed graph, where $V$ is the set of nodes, $E$ the set of edges, and $\mathbf{X} = \{\mathbf{x}^v\}_{v \in V}$ the set of node attributes $\mathbf{x}^v$ for each node $v \in V$. We assume that all $\mathbf{x}^v$ belong to some measurable space of dimension $d \geq 1$.

To design a model capable of generalizing to test graphs whose node attributes may live in a different space than $\mathbf{X}$, we propose a projection map that transforms the node attributes $(\pmb{x}^u, \pmb{x}^v)$ of the endpoints of an edge $(u, v) \in E$ into a fixed-dimensional pairwise embedding

$$
\mathcal{P}: \left(\boldsymbol{x}^{u}, \boldsymbol{x}^{v}\right) \mapsto \boldsymbol{r}^{uv} \in \mathbb{R}^{k}, \quad k \geq 1. \tag{1}
$$

By using pairwise embeddings, STAGE can model relationships between attributes belonging to different nodes. For instance, it can capture the relation between the attributes of the customer node Alice and the attributes of the product node Phone1 in Figure 2(a), such as the correlation between income level and price. We design the mapping $\mathcal{P}$ by building a graph based on pairwise pdf attribute descriptors. Viewing node attributes through their pdfs maps potentially non-aligned node attribute spaces into a universal space of densities, enabling consistency across diverse domains. The modeling of probabilities generalizes the learning of rules like "people with higher income level tend to buy expensive phones" to abstract relationships like "high values in $X_{1}$ correlate with high values in $X_{2}$", enabling knowledge transfer across domains with different attributes.

Concretely, let A and B be a random pair of nodes jointly and uniformly sampled from the edge set, $(\mathrm{A},\mathrm{B})\sim \mathrm{Unif}(E)$. Let $\mathbf{x}_i^{\mathrm{A}}$ denote the random variable of the $i$-th attribute value of random node A, and $\mathbf{x}_j^{\mathrm{B}}$ the $j$-th attribute value of random node B. Given a specific pair of distinct nodes $u,v\in V$ and specific attribute values $x_{i}^{u}$ and $x_{j}^{v}$, we define $p(x_i^u | x_j^v)$ from the conditional probabilities as follows, accounting for a mixture of totally ordered (e.g., scalar) and unordered (e.g., categorical) attributes:

- $p(x_i^u | x_j^v) \coloneqq \mathbb{P}(\mathbf{x}_i^{\mathrm{A}} \leq x_i^u | \mathbf{x}_j^{\mathrm{B}} \leq x_j^v)$, if both attributes $i$ and $j$ are totally ordered.
- $p(x_i^u | x_j^v) \coloneqq \mathbb{P}(\mathbf{x}_i^{\mathrm{A}} = x_i^u | \mathbf{x}_j^{\mathrm{B}} \leq x_j^v)$, if attribute $i$ is unordered and attribute $j$ is totally ordered.
- $p(x_i^u | x_j^v) \coloneqq \mathbb{P}(\mathbf{x}_i^{\mathrm{A}} \leq x_i^u | \mathbf{x}_j^{\mathrm{B}} = x_j^v)$, if attribute $i$ is totally ordered and attribute $j$ is unordered.
- $p(x_i^u | x_j^v) \coloneqq \mathbb{P}(\mathbf{x}_i^{\mathrm{A}} = x_i^u | \mathbf{x}_j^{\mathrm{B}} = x_j^v)$, if both attributes $i$ and $j$ are unordered.

If $u = v$, we change the sampling distribution to $\mathrm{A} \sim \mathrm{Unif}(V)$ and let $\mathrm{B} = \mathrm{A}$, so that STAGE can also model dependencies between attributes of the same node. If $i = j$, we change the conditional probability to $p(x_i^u) \coloneqq \mathbb{P}(\mathbf{x}_i^{\mathrm{A}} = x_i^u)$ if attribute $i$ is unordered and $p(x_i^u) \coloneqq \mathbb{P}(\mathbf{x}_i^{\mathrm{A}} \leq x_i^u)$ if attribute $i$ is totally ordered. This allows STAGE to also model each attribute independently through its pdf or cdf.

![](images/3841d7cf68d8de16c2701f30af359181b5285f8a268de61948807b24e49e1d12.jpg)
Figure 2: Given an input attributed graph $G$ (a), STAGE builds a STAGE-edge-graph (b) for every edge in $G$. Nodes in a STAGE-edge-graph correspond to attributes of the two edge endpoints, and the node and edge attributes are the empirical marginal and conditional probabilities of attribute values (Equations (2) and (3)). STAGE applies the intra-edge GNN on STAGE-edge-graphs (b) to obtain an edge embedding for each input graph edge, and then applies the inter-edge GNN on the modified graph containing these edge embeddings but not the node attributes (c). Details are provided in Algorithms 1 and 2.

In practice, these probabilities can be empirically estimated from the input data. For the node pair $u, v$ we define a conditional probability matrix $S^{uv}$, with indices $i, j \in \{1, \dots, 2d\}$, $i \neq j$, organized such that indices 1 to $d$ correspond to attributes of node $u$ and indices $d + 1$ to $2d$ correspond to attributes of node $v$:

$$
\boldsymbol{S}_{ij}^{uv} = \left\{ \begin{array}{ll} p\left(x_{i}^{u} \mid x_{j}^{u}\right) & \text{if } i \leq d \text{ and } j \leq d, \\ p\left(x_{i-d}^{v} \mid x_{j-d}^{v}\right) & \text{if } d < i \leq 2d \text{ and } d < j \leq 2d, \\ p\left(x_{i}^{u} \mid x_{j-d}^{v}\right) & \text{if } i \leq d \text{ and } d < j \leq 2d, \\ p\left(x_{i-d}^{v} \mid x_{j}^{u}\right) & \text{if } d < i \leq 2d \text{ and } j \leq d, \end{array} \right. \tag{2}
$$

and for the diagonal $i = j$ we define

$$
\boldsymbol{S}_{ij}^{uv} = \left\{ \begin{array}{ll} p\left(x_{i}^{u}\right) & \text{if } i \leq d, \\ p\left(x_{i-d}^{v}\right) & \text{if } i > d. \end{array} \right. \tag{3}
$$

The matrix $S^{uv}$ is the core node-pair data representation STAGE uses. This matrix is used to define a graph structure which we call a STAGE-edge-graph, illustrated in Figure 2(b), which captures, for the pair of nodes $u$ and $v$, the interactions among all pairs of attributes.

Definition 2.1 (STAGE-edge-graph). Given a pair of nodes $u, v \in V$, a STAGE-edge-graph for $(u, v)$ is a fully connected, weighted, directed graph $G(S^{uv})$ with $2d$ nodes, where node $i$ has a scalar attribute $S_{ii}^{uv}$, and edge $(i, j)$ has a scalar attribute $S_{ij}^{uv}$.

STAGE algorithm. As illustrated in Figures 2(b) and 2(c), STAGE uses a STAGE-edge-graph for each edge in the input graph in a two-stage process to produce attribute-domain-transferable representations. First, STAGE uses a GNN to obtain embeddings for each STAGE-edge-graph. These edge embeddings replace the original node attributes, resulting in a modified graph which is fed into a second GNN to solve the overall task, producing node, link, or graph representations. The two steps of STAGE are as follows:

1. (Intra-edge) Each $G(S^{uv})$ is processed with a GNN $M_1$ to produce an edge-level embedding $r^{uv} = M_1(G(S^{uv}))$.
2.
(Inter-edge) A second GNN $M_2$ processes the modified graph $G' = (V, E, \{r^{uv}\}_{(u,v) \in E})$ , i.e., the original graph without node attributes, but equipped with the learned edge embeddings to give a final representation $M(G) \coloneqq M_2(G')$ . + +The two GNNs $M_{1}$ and $M_{2}$ are trained end-to-end on the task. Note that $M_{1}$ can be any GNN designed to produce whole-graph embeddings and can take single-dimensional edge attributes, whilst $M_{2}$ can be any GNN that can take edge embeddings as input. + +Integration with language models. While STAGE can incorporate LLM embeddings for textual attributes, our experiments show STAGE-edge-graphs performs better on numerical and categorical data (Section 4). The approaches can be complementary - initialize node embeddings with + +LLM embeddings for textual attributes and edge embeddings with STAGE-edge-graphs for non-textual attributes. + +Modelling pairwise relations. $S^{uv}$ is only computed for edges $(u, v)$ , and so can only model pairwise relations between nodes connected by an edge. In some cases, such as bipartite graphs, we find it beneficial to add extra edges between nodes of the same type (see Section 4 for details). In general, higher-order relations could also be modelled similarly, albeit at increased complexity. We leave exploration of higher-order relations to future work. + +# 3. Statistical Underpinnings of STAGE + +This section explains how STAGE achieves domain transferability. The central result is to show that STAGE generates representations capable of measuring dependencies among node attributes in graphs. This means that STAGE can ignore "absolute" attribute values, while still generalizing through analogous statistical dependencies of the attributes. + +Our first step (Section 3.1) connects measures of statistical dependencies with a novel graph regression task. Then, Section 3.2 shows that our STAGE-edge-graphs (Definition 2.1) can lead to a compact model for this regression, with a variant that is invariant to a class of shifts between train and test attribute domains. The following theoretical results are meant to provide insights and are restricted to domains with a fixed number of attributes to simplify the proofs, extending them to variable size spaces is left as future work. Detailed proofs are provided in Appendix B. + +# 3.1. Statistical Dependence as Graph Regression + +We begin by introducing the framework for building what we call feature hypergraphs. We will show that feature hypergraphs can sufficiently encapsulate the statistical dependencies between attributes, while only leveraging the relative orders rather than the numerical values of the attribute, enabling it to be invariant to order-preserving transformations (formally defined in Definition B.2) to achieve domain transferability. In the following, we assume one attribute space defined over a totally ordered set (e.g., $\mathbb{R}^d$ for $d\geq 1$ , where the total order $\leq \tau$ is well defined), since the invariances of unordered sets are a special case (as these do not need order-preserving transformations). Before we describe how feature hypergraphs are built, we start with the concept of order statistic, which captures the relative ordering of the attribute values. + +Order statistic (David & Nagaraja, 2004). 
Let $\mathbf{x}_1, \mathbf{x}_2, \ldots, \mathbf{x}_m$ be a sequence of $m \geq 2$ random variables from some unknown distribution $F$ over a totally ordered set (e.g., a convex set $\mathbb{F} \subseteq \mathbb{R}$ ). Its order statistics are defined as the sorted values $\mathbf{x}_{(1)} \leq \mathbf{x}_{(2)} \leq \dots \leq \mathbf{x}_{(m)}$ , where $\mathbf{x}_{(k)}$ denotes the $k$ -th smallest value in the $m$ samples. + +Consider a domain with $m$ entities (e.g., products in an appliance store), where each entity is characterized by $d$ attributes. Specifically, an entity $u$ can be represented by a (row) vector of random attribute variables, $\mathbf{x}^u = [\mathbf{x}_1^u,\mathbf{x}_2^u,\dots ,\mathbf{x}_d^u ]$ , where $\mathbf{x}_i^u$ describes the $i$ -th attribute of entity $u$ that takes on values from the $i$ -th attribute space $\mathbb{F}_i\subseteq \mathbb{R}$ . With these variables, we define the (random) matrix $\mathbf{X}\coloneqq [(\mathbf{x}^{1})^{T},(\mathbf{x}^{2})^{T},\ldots ,(\mathbf{x}^{m})^{T}]^{T}$ of shape $m\times d$ . Alternatively, we can view $\mathbf{X}$ column-wise, where each attribute $i$ corresponds to a (column) random vector $\mathbf{x}_i = [\mathbf{x}_i^1,\mathbf{x}_i^2,\dots ,\mathbf{x}_i^m ]^T$ . Next, we introduce the order statistic for these attributes: let $\mathbf{x}_{i(k)}$ denote the $k$ -th order statistic of $\{\mathbf{x}_i^1,\dots ,\mathbf{x}_i^m\}$ . For instance, $\mathbf{x}_{i(1)} = \min \{\mathbf{x}_i^1,\dots ,\mathbf{x}_i^m\}$ . + +Given an input graph $G = (V,E,\mathbf{X})$ , we regard it as a sample from some unknown distribution over all attributed graphs with $m$ entities and $d$ attributes, where $\mathbf{X}$ is a random variable with $\mathbf{X} = [\pmb{x}_1,\dots,\pmb{x}_d]$ . Consider the edges in $E$ as samples of pairs of nodes that give rise to the multiset of attributes of the endpoint nodes, $\mathcal{E} = \{\{(x^u,x^v)\mid (u,v)\in E\}\}$ . Together with the order statistics, we now define the attribute hypergraph as follows: + +Definition 3.1 (Attribute hypergraph $\mathcal{F}_{\mathcal{E}}$ ). Given a multiset of attributes of the endpoint nodes $\mathcal{E} = \{(x^u, x^v) \mid (u, v) \in E\}$ of $m$ entities with totally ordered attribute spaces, the feature hypergraph $\mathcal{F}_{\mathcal{E}}$ is defined as follows. First, we label the graph with $m$ . Then, + +- For each order statistic $\pmb{x}_{i(k)}$ of attribute $i$ and order $k$ ( $1 \leq k \leq m$ ), there are 2 nodes, namely $(i,k,1)$ and $(i,k,2)$ . In total, there are exactly $2md$ nodes in $\mathcal{F}_{\mathcal{E}}$ (attribute values need not be unique). Nodes $(i,k,1)$ and $(i,k,2)$ store a single attribute to mark their order: $k$ . + +- Let $o_i(u)$ be the order of the attribute value $x_i^u$ , i.e., $x_{i(o_i(u))} = x_i^u$ . For each pair of attributes of endpoint nodes $(x^u, x^v) \in \mathcal{E}$ , there is a hyperedge $H_{uv}$ in $\mathcal{F}_{\mathcal{E}}$ defined as + +$$ +\begin{array}{l} H _ {u v} := \left\{\left(1, o _ {1} (u), 1\right), \left(1, o _ {1} (v), 2\right), \right. \\ (2, o _ {2} (u), 1), (2, o _ {2} (v), 2), \dots , \\ \left. \left(d, o _ {d} (u), 1\right), \left(d, o _ {d} (v), 2\right) \right\}. \tag {4} \\ \end{array} +$$ + +Our first observation is that the feature hypergraph in Definition 3.1 perfectly captures the order statistics of the set $\mathcal{E}$ but discards the actual values of the attributes. + +We now consider statistical tests that measure dependencies of the attributes of endpoint nodes. 
As an example, consider that if $(\mathbf{x}^u,\mathbf{x}^v)\in \mathcal{E}$ are samples (not necessarily independently sampled) from a bivariate distribution $(\mathbf{x},\mathbf{x}^{\prime})\sim F$ one may be interested in testing the hypothesis + +$$ +H _ {0}: F (\mathbf {x}, \mathbf {x} ^ {\prime}) = F _ {1} (\mathbf {x}) F _ {2} (\mathbf {x} ^ {\prime}), +$$ + +i.e., that $\mathbf{x}$ and $\mathbf{x}'$ are independent. Bell (1964); Berk & Bickel (1968) showed that over totally ordered sets, measures (e.g., $p$ -values) of such hypothesis tests for pairwise independence $(H_0$ above) and higher-order conditional independence between multiple variables, have invariances that simplify the data representation to such a degree that the original values are discarded, retaining only the order relationships between the variable values. Any such test is therefore a rank test, i.e., it relies only on indices of the order statistic, not on the numerical values of the attributes. + +Our first theoretical contribution is the observation that any statistical test that focuses on measuring the (conditional) dependencies of attributes of endpoint nodes in $\mathcal{E}$ can be defined as a graph regression task over the feature hypergraph $\mathcal{F}_{\mathcal{E}}$ of Definition 3.1. + +Theorem 3.2. Given a multiset of attributes of the endpoint nodes $\mathcal{E}$ , the corresponding feature hypergraph $\mathcal{F}_{\mathcal{E}}$ (Definition 3.1) and a most-expressive hypergraph GNN encoder $M_{\theta^{*}}(\mathcal{F}_{\mathcal{E}})$ , then any test $T(\mathcal{E})$ that focuses on measuring the dependence of the attributes of the endpoint nodes of $\mathcal{E}$ has an equivalent function $h$ within the space of Multilayer Perceptrons (MLPs) that depends solely on the graph representation $M_{\theta^{*}}(\mathcal{F}_{\mathcal{E}})$ , i.e., $\exists h \in MLPs$ s.t. $T(\mathcal{E}) = h(M_{\theta^{*}}(\mathcal{F}_{\mathcal{E}}))$ . + +Next we show that the hypergraph $\mathcal{F}_{\mathcal{E}}$ can be simplified with STAGE-edge-graph and that the ability to compute dependency measures can be made invariant to certain domain shifts between train and test. + +# 3.2. Transferability of STAGE + +The feature hypergraph $\mathcal{F}_{\mathcal{E}}$ in Definition 3.1 is used to obtain a maximal invariant graph representation via hypergraph GNN. This solution has a high computational cost from the use of hypergraph GNNs. Fortunately, we show that by assigning unique attribute identifiers to label the nodes of our STAGE-edge-graphs $G(S^{uv})$ (Definition 2.1), STAGE-edge-graphs are as informative as the corresponding feature hypergraphs, preserve the same invariances, while allowing the usage of (non-hypergraph) GNN encoders. + +Theorem 3.3. Given the attributes of the endpoint nodes $\mathcal{E}$ (Definition 3.1) of a graph $G = (V, E, X)$ , there exists an optimal parameterization $\theta_{g}^{*}, \theta_{s}^{*}$ for a most expressive GNN encoder $M^{g}$ and a most-expressive multiset encoder $M^{s}$ , respectively, such that $M_{\theta_{s}^{*}, \theta_{g}^{*}}(G) := M_{\theta_{s}^{*}}^{\mathrm{s}}\left(\left\{\left\{M_{\theta_{g}^{*}}^{g}(G(S^{uv}))(u,v) \in E\right\}\right\}\right)$ such that any test $T(\mathcal{E})$ that measures the dependence of $\mathcal{E}$ 's attributes of the endpoint nodes has an equivalent function $h$ within the space of Multilayer Perceptrons (MLPs) that depends solely on the graph representation $M_{\theta_{s}^{*}, \theta_{g}^{*}}(G)$ , i.e., $\exists h \in MLPs$ s.t. 
$T(\mathcal{E}) = h(M_{\theta_{s}^{*}, \theta_{g}^{*}}(G))$ . + +Theorem 3.3 motivates the design of STAGE, which lever + +ages a GNN on STAGE-edge-graphs to obtain edge-level embeddings. However, the use of unique attribute identifiers in the STAGE-edge-graphs disrupts invariance to permutations in attribute order (e.g., U.S. shoe size appearing as the first attribute in one dataset and U.K. shoe size as the last attribute in another), thereby limiting its domain transferability. More broadly, we now describe all the invariances we want for STAGE to have in order to be robust to a class of attribute domain shifts. + +COGG invariances. STAGE-edge-graphs facilitate domain transfer to distinct attribute domains. Intuitively, the full set of invariances required for domain transferability over $G = (V, E, X)$ consists of: (1) invariance or equivariance to transformations of attribute values that preserve the order statistic, (2) invariance or equivariance to permutations of attribute orders (columns of $X$ ), and (3) invariance or equivariance to permutations of nodes in the graph, affecting $V$ (and consequently $E$ ) and the rows of $X$ . These invariances are formalized in Definition B.5 in Appendix B.4 through the actions of component-wise order-preserving groupoid for graphs (COGG). Importantly, groups are insufficient to capture these invariances because they assume transformations act within a single attribute domain. However, we are interested in transformations across distinct attribute spaces. Groupoids generalize groups by allowing these transformations between different domains, making them the natural choice for modeling the required invariances. + +We now introduce our final theoretical contribution which establishes that STAGE achieves invariance to COGGs by design. This result shows that STAGE can provably achieve the zero-shot transferability to the class of attribute domain shifts defined by COGGs-type transformations. + +Theorem 3.4. STAGE is invariant to COGGs (Definition B.5). + +The proof sketch is as follows. From Theorem 3.3, STAGE achieves invariance to changes in attribute values, including their types, names, and semantics. Then, by dropping the attribute identifiers in STAGE-edge-graphs, we sacrifice maximal expressivity but ensure that STAGE is invariant to permutations of the attribute order. Finally, since STAGE employs a second GNN on the original input graph, using the embeddings of the STAGE-edge-graphs, while omitting the original node attributes, STAGE achieves invariance to node permutations. Thus, the method is invariant to COGGs. + +# 4. Experiments + +We demonstrate the effectiveness of STAGE across multiple experimental settings, focusing on small to medium-sized datasets. While the computational complexity scales linearly with the graph size and quadratically with the number of attributes, training on these datasets introduces only mod- + +![](images/d91f2afe9b449c01b40fc235ec7fb613bb536ffdb7cc83cdb1c5b5e39b2eaf47.jpg) +Figure 3: Zero-shot Hits@1 performance (higher is better) of STAGE and baselines, trained on four (or five) distinct E-Commerce Store domains and evaluated on the held-out domain (or H&M dataset). NBFNet-STAGE consistently achieves the highest zero-shot accuracy across all test domains, with up to $103\%$ improvement. + +erate computational overhead (e.g., $7.83\%$ slower than the fastest baseline in link prediction; see Appendix H). Therefore, STAGE is highly effective in these settings, achieving strong generalization performance. 
In the following, we present our main results and refer to Appendix D for details. Our code is available at https://github.com/snapstanford/stage-gnn/. + +Datasets. To evaluate zero-shot generalization to graphs with new attributes, we consider datasets with distinct domain-specific attributes but a shared task. Our datasets contain graphs with up to 4k nodes, 50k edges, 16 attributes, representing small to medium-size real-world scenarios where STAGE is particularly effective. Due to space constraints, we introduce them below and refer to Appendix C. + +E-Commerce Stores dataset (link prediction). We use data from a multi-category store (Kechinov, 2020) containing customer-product interactions (purchases, cart additions, views) over time. To simulate distinct single-category retailers, we partition the dataset into five domains, each representing a specialized store: shoes, refrigerators, desktops, smartphones, and beds. Each domain has its own customer base and product-specific attributes (e.g., smartphones have display type; shoes have ankle height). The task is to predict future customer-product interactions from past actions. + +H&M dataset (link prediction). We use the H&M Personalized Fashion Recommendations dataset (Kaggle, 2021), which contains transactions from a large fashion retailer, to evaluate the zero-shot performance of models trained on E-Commerce Stores. All attributes, except for "price", differ from those in E-Commerce Stores. The task remains to predict customer-product interactions from past actions. + +Social network datasets (node classification): Friendster and Pokec. We evaluate STAGE on two online social networks from different regions and user bases: Friendster (Teixeira et al., 2019) and Pokec (SNAP, 2012). Friendster nodes have attributes such as age, gender, interests, while Pokec nodes have public profile status, completion percentage, region, age, and gender. The task is to predict a node attribute common to both social networks using network structure and remaining node attributes. Since only age and gender are shared, we create two tasks: mask and predict gender (presented in this section), and mask and regress on age (discussed in Appendix E). + +Baselines. We compare STAGE to several baselines designed to handle new node attributes. (1) raw: Projects each raw node attribute into a fixed-dimensional space with a linear transformation, before summing across the projected dimensions. (2) gaussian: Use Gaussian noise as node attributes (Sato et al., 2021; Abboud et al., 2021). (3) structural: Ignores node attributes entirely, using only the graph structure. (4) lvm: Converts node attributes into textual descriptions and obtains embeddings using a pretrained encoder-only language model, taking only the node attributes as input (without graph structure) due to prompt length limitations, similar to PRODIGY (Huang et al., 2023). (5) normalized: Retains only continuous attributes and standardize them. For a fair comparison, all methods utilize the same underlying GNN architecture, NBFNet (Zhu et al., 2021c) for link prediction and GINE (Hu et al., 2020) for node classification. In Appendix F, we report additional experiments with other architectures. In addition to these baselines, we evaluate our approach against GraphAny (Zhao et al., 2024b), a recent method for domain transferability in node + +classification tasks, but not applicable to link prediction. + +# 4.1. 
Zero-Shot Link Prediction on Unseen Domains + +We evaluate the performance of all methods on zero-shot generalization on the E-Commerce Stores dataset, training on four categories, and testing on the held-out fifth category. + +Results. As shown in Figure 3, STAGE consistently outperforms all baselines in zero-shot Hits@1 across all test domains. Notable improvements include: $103\%$ gain when testing on the smartphone category (0.51 vs 0.25 Hits@1), $40\%$ on bed (0.44 vs 0.31), and $33\%$ on desktop (0.59 vs 0.44) compared to the strongest baselines. + +In Table 1, we report the average performance of each model, calculated by taking the results in which each domain is held out once and averaging the scores. Our evaluation also includes popular non-parametric link prediction approaches such as Common Neighbors, Adamic Adar, and Personalized PageRank, with results showing that STAGE substantially outperforms classical heuristic methods by $54\%$ , $51\%$ , and $3837\%$ respectively on Hits@1, while maintaining similar performance advantages on MRR. Overall, STAGE achieves $41\%$ higher average Hits@1 (0.46 vs 0.33) and $29\%$ higher MRR (0.50 vs 0.38) against the strongest baseline (normalized), with lower variance across seeds. This emphasizes the benefit of STAGE in transforming node attributes into a unified input space using learned edge embedding via STAGE-edge-graph, including its stronger attribute representation capabilities than LLM-based encoding approaches in the medium-sized graphs considered in this work. + +# 4.2. Cross-Dataset Zero-Shot Link Prediction + +We evaluate models trained on E-Commerce Stores for zero-shot prediction on the H&M dataset, which has distinct customers, products, activity patterns and attributes. + +Table 1 shows that the performance on H&M of STAGE when trained on E-Commerce Stores is virtually identical to its performance on the held-out category in E-Commerce Stores (0.46 vs. 0.46 Hits@1). This highlights the robustness of STAGE to domain shifts, as it maintains similar performance when transitioning from E-Commerce Stores, which primarily feature household items, electronics, and shoes, to H&M, which focuses on clothing with minimal overlap in product types. + +In Hits@1, STAGE achieves a relative improvement of $103\%$ over the best parametric baseline (llm) (0.46 vs. 0.23). Moreover, STAGE obtains a relative improvement of $202\%$ against a supervised structural method trained and tested on H&M (structural-supervised). In MRR, STAGE achieves the highest score, outperforming the best baseline by $99\%$ . + +Moreover, STAGE demonstrates a substantial improvement + +of $99\%$ in Hits@1 over Adamic Adar (0.466 vs 0.2349), which performs the best among traditional heuristic methods on the H&M dataset. Similarly, STAGE outperforms Adamic Adar by $48\%$ in MRR (0.4703 vs. 0.3184), further confirming the its superiority over classical link prediction heuristics in zero-shot scenarios. + +# 4.3. Zero-Shot Node Classification on Unseen Domains + +To validate our approach beyond link prediction and E-Commerce scenarios, we benchmark on a node classification task using two social network datasets, where the goal is to predict user gender. We train models on Friendster and evaluate zero-shot on Pokec. + +Table 2 shows that STAGE achieves a $10.3\%$ improvement over the best baseline (and lower variance), also surpassing the task-specific model GraphAny (Zhao et al., 2024b) and the cross-domain pretraining method GCOPE (Zhao et al., 2024a). 
This indicates that STAGE effectively captures attribute dependencies also in node classification tasks and outperforms all approaches by leveraging its unified input space obtained by the usage of the STAGE-edge-graphs. + +# 4.4. Generalization When Training on Multiple Domains + +We examine how the model performance varies with the number of training domains in E-Commerce Stores. + +As shown in Figure 4, STAGE obtains improving zero-shot performance (both Hits@1 and MRR) with more training domains. While not the only method showing improvement, STAGE exhibits notably tighter interquartile ranges compared to the only other method exhibiting better performance with increasing domain (gaussian) at higher domain counts. Additionally, STAGE's lower whiskers consistently rise with more domains, showing also that its worst-case scenarios improve with more training data. + +These results further validate that STAGE is capable of learning transferable patterns across domains through its defined unified input space. The consistent performance gains with additional training domains suggest that STAGE-edge-graph effectively captures generalizable dependencies between attributes, with more training domains enabling the learning of a broader range of dependencies. In contrast, baseline approaches that ignore attributes or use generic embeddings fail to leverage the additional training domains for improved cross-domain generalization. + +# 5. Related Work + +In this section, we present the most closely related works to our STAGE. A more in-depth comparison, along with additional related work, can be found in Appendix I. + +Table 1: NBFNet-STAGE outperforms all baselines in zero-shot Hits@1 and MRR (including supervised approaches) across the E-Commerce Stores and H&M datasets. For the E-Commerce Stores, results are averaged across models trained on all combinations of four graph domains and tested on the remaining domain. For zero-shot test on H&M, models are trained on the five E-Commerce Stores domains. % gain shows relative improvement of STAGE over each baseline. + +
Training: E-Commerce Stores. The left four result columns report testing on the held-out E-Commerce Store domain; the right four report testing on the H&M dataset.

| Model | Hits@1 (↑) | % gain | MRR (↑) | % gain | Hits@1 (↑) | % gain | MRR (↑) | % gain |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| random | 0.0026 ± 0.0000 | 17615% | - | - | 0.0006 ± 0.0000 | 77667% | - | - |
| Common Neighbors | 0.2991 ± 0.0006 | 54% | 0.3942 ± 0.0014 | 26% | 0.2354 ± 0.0000 | 98% | 0.3179 ± 0.0000 | 48% |
| Adamic Adar | 0.3052 ± 0.0007 | 51% | 0.4001 ± 0.0015 | 24% | 0.2349 ± 0.0000 | 99% | 0.3184 ± 0.0000 | 48% |
| Personalized PageRank | 0.0117 ± 0.0000 | 3837% | 0.0714 ± 0.0001 | 596% | 0.0105 ± 0.0000 | 4344% | 0.0717 ± 0.0000 | 556% |
| NBFNet-raw | 0.0000 ± 0.0000 | - | 0.0032 ± 0.0009 | 15434% | 0.0005 ± 0.0004 | 93220% | 0.0059 ± 0.0011 | 7871% |
| NBFNet-gaussian | 0.2101 ± 0.0428 | 119% | 0.2617 ± 0.0459 | 90% | 0.0925 ± 0.0708 | 404% | 0.1176 ± 0.0756 | 300% |
| NBFNet-structural | 0.3149 ± 0.0253 | 46% | 0.3721 ± 0.0219 | 34% | 0.2231 ± 0.0060 | 109% | 0.2302 ± 0.0080 | 104% |
| NBFNet-llm | 0.3226 ± 0.0190 | 43% | 0.3830 ± 0.0145 | 30% | 0.2302 ± 0.0015 | 103% | 0.2365 ± 0.0021 | 99% |
| NBFNet-normalized | 0.3269 ± 0.0213 | 41% | 0.3844 ± 0.0159 | 29% | 0.2286 ± 0.0010 | 104% | 0.2341 ± 0.0018 | 101% |
| NBFNet-structural-supervised | N/A | N/A | N/A | N/A | 0.1546 ± 0.0084 | 202% | 0.2103 ± 0.0164 | 124% |
| NBFNet-STAGE (Ours) | 0.4606 ± 0.0123 | 0% | 0.4971 ± 0.0073 | 0% | 0.4666 ± 0.0020 | 0% | 0.4703 ± 0.0029 | 0% |
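For reference, the ranking metrics reported in Table 1 can be computed as in the following sketch. This is our own illustration; the function name and the candidate-ranking setup are assumptions, and the paper's evaluation protocol may differ (e.g., in how negative candidates are sampled or filtered).

```python
import torch


def hits_at_k_and_mrr(pos_scores: torch.Tensor,
                      neg_scores: torch.Tensor,
                      k: int = 1):
    """Ranking metrics for link prediction.

    pos_scores: (N,) score of the true edge for each query.
    neg_scores: (N, C) scores of C candidate (negative) edges per query.
    Returns (Hits@k, MRR), where rank 1 is best.
    """
    # Rank of the positive among (1 + C) candidates: 1 plus the number of
    # negatives that score strictly higher than the positive.
    rank = 1 + (neg_scores > pos_scores.unsqueeze(1)).sum(dim=1)
    hits_at_k = (rank <= k).float().mean().item()
    mrr = (1.0 / rank.float()).mean().item()
    return hits_at_k, mrr


# Toy usage: 100 queries, 50 negative candidates each.
pos = torch.randn(100)
neg = torch.randn(100, 50)
print(hits_at_k_and_mrr(pos, neg, k=1))
```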
+ +![](images/9aeda62cb4f199dfecf9a725185d14ab04570a52dc45fecb9ceafe5101774e02.jpg) +(a) Zero-shot Hits@1 + +![](images/be4b2e48ee46e7fccbcd89de81c0e67fd6cd511795a70ad166fcd7465a071ac1.jpg) +(b) Zero-shot MRR +Figure 4: The performance (both Hits@1 and MRR) of STAGE improves with more train domains, while this is not the case for other methods. Box-plot distribution over all combinations of a fixed number of graph domains in the E-Commerce Stores dataset and testing on the held-out domain(s), averaged over random seeds. + +Graphs Generalization under Distribution Shifts. Several works address distribution shifts between train and test graphs over the same attribute domain, such as Shen et al. (2023); Zhu et al. (2021b), which employ learned augmentations to mitigate the change in distribution in test. Meanwhile, extensive research has focused on domain adaptation for GNNs (Dai et al., 2022; Li et al., 2020; Kong et al., 2022; Pei et al., 2020; Velicković et al., 2019; Wiles et al., 2022; Zhang et al., 2019; Zhu et al., 2021a), which typically assume access to data in both source and target domains. In contrast, our work tackles the more challenging scenario of zero-shot generalization to unseen attribute domains. To the best of our knowledge, all out-of-distribution graph methods (Zhang et al., 2024a) do not address the attribute domain shifts we consider, which include changes in the number of attributes between train and test. + +Foundation Models for Graphs. Developing foundation models for graph data is a growing research interest, aiming + +to create versatile graph models capable of generalizing across different graphs and tasks (Mao et al., 2024). Initial efforts in this direction convert attributed graphs into texts and apply an LLM (Liu et al., 2024; Chen et al., 2024b;a; Tang et al., 2024; Zhao et al., 2023; He & Hooi, 2024; Huang et al., 2023). However, while promising, this methodology risks information loss and may limit transferability (Collins et al., 2024; Gruver et al., 2024; Schwartz et al., 2024). In contrast, non-LLM approaches attempt to directly address domain transferability in the attribute space (Xia & Huang, 2024; Lachi et al., 2024; Zhao et al., 2024b; Frasca et al., 2024; Yu et al., 2024; Zhao et al., 2024a), or by avoiding the use of node attributes entirely (Gao et al., 2023; Lee et al., 2023; Galkin et al., 2024; Zhang et al., 2024b). We provide details in Appendix I about why some of these approaches are not applicable as our baselines. + +Table 2: Zero-shot test accuracy of STAGE and baselines on the Pokec dataset, trained on Friendster. % gain shows relative improvement of STAGE over each baseline. + +
| Model | Accuracy (↑) | % gain |
| --- | --- | --- |
| random | 0.500 ± 0.0000 | 30.4% |
| GINE-raw | 0.558 ± 0.0829 | 16.8% |
| GINE-gaussian | 0.588 ± 0.0250 | 10.9% |
| GINE-structural | 0.564 ± 0.0466 | 15.6% |
| GINE-llm | 0.550 ± 0.0368 | 18.5% |
| GINE-normalized | 0.541 ± 0.0148 | 20.5% |
| GraphAny | 0.591 ± 0.0083 | 10.3% |
| GCOPE | 0.535 ± 0.0153 | 21.9% |
| GINE-STAGE (Ours) | 0.652 ± 0.0042 | 0% |
+ +# 6. Conclusion and Future Work + +The challenge of learning universal graph representations that generalize across diverse attribute domains has limited progress in graph foundation models, mainly due to the lack of a unified input space to represent node attributes, which may vary in test graphs. In this paper, we proposed STAGE, which addresses this limitation by transforming diverse attribute spaces into a unified representation, learning statistical dependencies between attributes instead of relying on their absolute values. By demonstrating that these dependencies remain invariant under certain domain shifts, STAGE provides theoretical foundations for zero-shot generalization across graphs with differing attribute spaces. Our strong empirical results on medium-sized datasets demonstrate the practical effectiveness of this approach. + +While STAGE represents a meaningful step forward, it also highlights opportunities for future research. The unified input space we introduce could serve as a basis for developing graph foundation models that can learn from diverse graph datasets at scale, reducing the quadratic complexity of STAGE. However, realizing this potential will require addressing additional challenges, such as developing architectures to capture complex high-order attribute dependencies and scaling to large graph collections. + +# Acknowledgments + +BR and BB acknowledge support from the National Science Foundation (NSF) awards CCF-1918483, CAREER IIS-1943364 and CNS-2212160, an Amazon Research Award, and AnalytiXIN, Wabash Heartland Innovation Network (WHIN), Ford, NVidia, CISCO, and Amazon. Computing infrastructure was supported in part by CNS-1925001 (CloudBank). This work was supported in part by AMD under the AMD HPC Fund program. + +JL gratefully acknowledges the support of NSF under Nos. OAC-1835598 (CINES), CCF-1918940 (Expeditions), + +DMS-2327709 (IHBEM), IIS-2403318 (III); Stanford Data Applications Initiative, Wu Tsai Neurosciences Institute, Stanford Institute for Human-Centered AI, Chan Zuckerberg Initiative, Amazon, Genentech, GSK, Hitachi, SAP, and UCB. The content is solely the responsibility of the authors and does not necessarily represent the official views of the funding entities. + +# Impact Statement + +This paper presents work whose goal is to advance the field of Machine Learning. There are many potential societal consequences of our work, none which we feel must be specifically highlighted here. + +# References + +Abboud, R., Ceylan, I. I., Grohe, M., and Lukasiewicz, T. The surprising power of graph neural networks with random node initialization. In Proceedings of the Thirtieth International Joint Conference on Artificial Intelligence (IJCAI), 2021. +Bell, C. B. A characterization of multisample distribution-free statistics. Annals of Mathematical Statistics, 35(2): 735-738, 1964. doi: 10.1214/aoms/1177703564. +Berk, R. and Bickel, P. On invariance and almost invariance. Annals of Mathematical Statistics, 39(5):1573-1576, 1968. doi: 10.1214/aoms/1177698328. +Berk, R., Nogales, A., and Oyola, J. Some counterexamples concerning sufficiency and invariance. The Annals of Statistics, pp. 902-905, 1996. +Berk, R. H. A remark on almost invariance. The Annals of Mathematical Statistics, pp. 733-735, 1970. +Bevilacqua, B., Robinson, J., Leskovec, J., and Ribeiro, B. Holographic node representations: Pre-training task-agnostic node embeddings. In The Thirteenth International Conference on Learning Representations, 2025. +Chen, R., Zhao, T., Jaiswal, A. 
K., Shah, N., and Wang, Z. LLaGA: Large language and graph assistant. In *Forty-first International Conference on Machine Learning*, 2024a. +Chen, Z., Mao, H., Li, H., Jin, W., Wen, H., Wei, X., Wang, S., Yin, D., Fan, W., Liu, H., and Tang, J. Exploring the potential of large language models (llms) in learning on graphs. ACM SIGKDD Explorations Newsletter, 25(2): 42-61, 2024b. +Collins, K. M., Jiang, A. Q., Frieder, S., Wong, L., Zilka, M., Bhatt, U., Lukasiewicz, T., Wu, Y., Tenenbaum, J. B., Hart, W., et al. Evaluating language models for mathematics through interactions. Proceedings of the National Academy of Sciences, 121(24):e2318124121, 2024. + +Dai, Q., Wu, X.-M., Xiao, J., Shen, X., and Wang, D. Graph transfer learning via adversarial domain adaptation with graph convolution. IEEE Transactions on Knowledge and Data Engineering, 35(5):4908-4922, 2022. +David, H. and Nagaraja, H. Order statistics. Encyclopedia of Statistical Sciences, 2004. +Erhan, D., Bengio, Y., Courville, A., and Vincent, P. Visualizing higher-layer features of a deep network. Technical Report 1341, University of Montreal, June 2009. Also presented at the ICML 2009 Workshop on Learning Feature Hierarchies, Montr'el, Canada. +Fatemi, B., Halcrow, J., and Perozzi, B. Talk like a graph: Encoding graphs for large language models. In The Twelfth International Conference on Learning Representations, 2024. +Frasca, F., Jogl, F., Eliasof, M., Ostrovsky, M., Schonlieb, C.-B., Gartner, T., and Maron, H. Towards foundation models on graphs: An analysis on cross-dataset transfer of pretrained gnns. arXiv preprint arXiv:2412.17609, 2024. +Galkin, M., Yuan, X., Mostafa, H., Tang, J., and Zhu, Z. Towards foundation models for knowledge graph reasoning. In The Twelfth International Conference on Learning Representations, 2024. +Gao, J., Zhou, Y., Zhou, J., and Ribeiro, B. Double equivariance for inductive link prediction for both new nodes and new relation types. arXiv preprint arXiv:2302.01313, 2023. +Gruver, N., Finzi, M., Qiu, S., and Wilson, A. G. Large language models are zero-shot time series forecasters. Advances in Neural Information Processing Systems, 36, 2024. +He, Y. and Hooi, B. Unigraph: Learning a cross-domain graph foundation model from natural language. ArXiv, abs/2402.13630, 2024. +Hu, W., Liu, B., Gomes, J., Zitnik, M., Liang, P., Pande, V., and Leskovec, J. Strategies for pre-training graph neural networks. In International Conference on Learning Representations, 2020. +Huang, Q., Ren, H., Chen, P., Kržmanc, G., Zeng, D., Liang, P. S., and Leskovec, J. Prodigy: Enabling in-context learning over graphs. Advances in Neural Information Processing Systems, 36, 2023. +Kaggle. H&m personalized fashion recommendations, 2021. URL https://www.kaggle.com/competitions/h-and-m-personalized-fashion-recommendations/overview. Accessed: 2024-09-21. + +Kechinov, M. ecommerce behavior data from multi category store, 2020. URL www.kaggle.com/datasets/mkechinov/ecommerce-behavior-data-from-multi-category-store. +Kipf, T. and Welling, M. Semi-supervised classification with graph convolutional networks. ArXiv, abs/1609.02907, 2016. +Kong, K., Li, G., Ding, M., Wu, Z., Zhu, C., Ghanem, B., Taylor, G., and Goldstein, T. Flag: Adversarial data augmentation for graph neural networks. arXiv preprint arXiv:2010.09891, 2022. +Koning, N. W. and Hemerik, J. More efficient exact group invariance testing: using a representative subgroup. Biometrika, 111(2):441-458, 2024. +Lachi, D., Azabou, M., Arora, V., and Dyer, E. 
Graphfm: A scalable framework for multi-graph pretraining. arXiv preprint arXiv:2407.11907, 2024. +Larochelle, H., Erhan, D., and Bengio, Y. Zero-data learning of new tasks. In AAAI Conference on Artificial Intelligence, 2008. +Lee, J., Chung, C., and Whang, J. J. Ingram: Inductive knowledge graph embedding via relation graphs. In International Conference on Machine Learning, pp. 18796-18809. PMLR, 2023. +Lehmann, E. L., Romano, J. P., and Casella, G. Testing statistical hypotheses, volume 3. Springer, 3rd edition, 2005. +Leshno, M., Lin, V. Y., Pinkus, A., and Schocken, S. Multilayer feedforward networks with a nonpolynomial activation function can approximate any function. Neural Networks, 6(6):861-867, 1993. ISSN 0893-6080. doi: https://doi.org/10.1016/S0893-6080(05)80131-5. +Li, Y., Ma, T., Zhang, J., Ding, Y., Chen, Y., and Tang, J. Progressive graph learning for open-set domain adaptation. In International Conference on Machine Learning, pp. 5968-5977. PMLR, 2020. +Liu, H., Feng, J., Kong, L., Liang, N., Tao, D., Chen, Y., and Zhang, M. One for all: Towards training one graph model for all classification tasks. In The Twelfth International Conference on Learning Representations, 2024. +Mao, H., Li, J., Shomer, H., Li, B., Fan, W., Ma, Y., Zhao, T., Shah, N., and Tang, J. Revisiting link prediction: A data perspective. arXiv preprint arXiv:2310.00793, 2023. +Mao, H., Chen, Z., Tang, W., Zhao, J., Ma, Y., Zhao, T., Shah, N., Galkin, M., and Tang, J. Graph foundation models. In arXiv preprint arXiv:2402.02216, 2024. + +Pei, H., Wei, B., Chang, K. C.-C., Lei, Y., and Yang, B. Domain adaptation on graphs by learning aligned graph bases. In Advances in Neural Information Processing Systems, volume 33, pp. 16735-16745, 2020. +Perozzi, B., Fatemi, B., Zelle, D., Tsitsulin, A., Kazemi, M., Al-Rfou, R., and Halcrow, J. Let your graph do the talking: Encoding structured data for llms. arXiv preprint arXiv:2402.05862, 2024. +Samuel, D. and Øvrelid, L. Tokenization with factorized subword encoding. In *Findings of the Association for Computational Linguistics: ACL 2023*, Toronto, Canada, 2023. Association for Computational Linguistics. +Sato, R., Yamada, M., and Kashima, H. Random features strengthen graph neural networks. In Proceedings of the 2021 SIAM International Conference on Data Mining, SDM, 2021. +Schwartz, E., Choshen, L., Shtok, J., Doveh, S., Karlinsky, L., and Arbelle, A. Numerologic: Number encoding for enhanced llms' numerical reasoning. arXiv preprint arXiv:2404.00459, 2024. +Shen, X., Sun, D., Pan, S., Zhou, X., and Yang, L. T. Neighbor contrastive learning on learnable graph augmentation. In Proceedings of the AAAI conference on artificial intelligence, volume 37, pp. 9782-9791, 2023. +Simonyan, K., Vedaldi, A., and Zisserman, A. Deep inside convolutional networks: Visualising image classification models and saliency maps. CoRR, abs/1312.6034, 2013. +SNAP, S. N. A. P. Pokec social network dataset, 2012. URL https://snap.stanford.edu/data/soc-Pokec.html. Accessed: 2024-09-21. +Tang, J., Yang, Y., Wei, W., Shi, L., Su, L., Cheng, S., Yin, D., and Huang, C. Graphgpt: Graph instruction tuning for large language models. In Proceedings of the 47th International ACM SIGIR Conference on Research and Development in Information Retrieval, pp. 491-500, 2024. +Teixeira, L., Jalaian, B., and Ribeiro, B. Are graph neural networks miscalibrated? arXiv preprint arXiv:1905.02296, 2019. +Veličković, P., Fedus, W., Hamilton, W. L., Lio, P., Bengio, Y., and Hjelm, R. D. Deep graph infomax. 
In International Conference on Learning Representations, 2019. +Wang, T., Roberts, A., Hesslow, D., Scao, T. L., Chung, H. W., Beltagy, I., Launay, J., and Raffel, C. What language model architecture and pretraining objective works best for zero-shot generalization? In Proceedings of the + +39th International Conference on Machine Learning, volume 162 of Proceedings of Machine Learning Research, pp. 22964-22984. PMLR, 17-23 Jul 2022. +Wiles, O., Gowal, S., Stimberg, F., Rebuffi, S.-A., Ktena, I., Dvijotham, K., and Cemgil, A. T. A fine-grained analysis on distribution shift. In International Conference on Learning Representations (ICLR), 2022. +Xia, L. and Huang, C. Anygraph: Graph foundation model in the wild. arXiv preprint arXiv:2408.10700, 2024. +Xian, Y., Schiele, B., and Akata, Z. Zero-shot learning-the good, the bad and the ugly. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 4582-4591, 2017. +Xu, K., Hu, W., Leskovec, J., and Jegelka, S. How powerful are graph neural networks? ArXiv, abs/1810.00826, 2018. +Yu, X., Zhou, C., Fang, Y., and Zhang, X. Text-free multi-domain graph pre-training: Toward graph foundation models. arXiv preprint arXiv:2405.13934, 2024. +Zhang, K., Liu, S., Wang, S., Shi, W., Chen, C., Li, P., Li, S., Li, J., and Ding, K. A survey of deep graph learning under distribution shifts: from graph out-of-distribution generalization to adaptation. arXiv preprint arXiv:2410.19265, 2024a. +Zhang, Y., Song, G., Du, L., Yang, S., and Jin, Y. Dane: Domain adaptive network embedding. arXiv preprint arXiv:1906.00684, 2019. +Zhang, Y., Bevilacqua, B., Galkin, M., and Ribeiro, B. TRIX: A more expressive model for zero-shot domain transfer in knowledge graphs. In The Third Learning on Graphs Conference, 2024b. +Zhang, Z., Li, H., Zhang, Z., Qin, Y., Wang, X., and Zhu, W. Graph meets llms: Towards large graph models. In NeurIPS 2023 Workshop: New Frontiers in Graph Learning, 2023. +Zhao, H., Chen, A., Sun, X., Cheng, H., and Li, J. All in one and one for all: A simple yet effective method towards cross-domain graph pretraining. In Proceedings of the 30th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, pp. 4443-4454, 2024a. +Zhao, J., Zhuo, L., Shen, Y., Qu, M., Liu, K., Bronstein, M., Zhu, Z., and Tang, J. Graphtext: Graph reasoning in text space. arXiv preprint arXiv:2310.01089, 2023. +Zhao, J., Mostafa, H., Galkin, M., Bronstein, M., Zhu, Z., and Tang, J. Graphany: A foundation model for node classification on any graph. ArXiv, abs/2405.20445, 2024b. + +Zhu, Q., Ponomareva, N., Han, J., and Perozzi, B. Shift-robust gnns: Overcoming the limitations of localized graph training data. In Advances in Neural Information Processing Systems (NeurIPS), volume 34, 2021a. +Zhu, Y., Xu, Y., Liu, Q., and Wu, S. An empirical study of graph contrastive learning. In Proceedings of the NeurIPS Track on Datasets and Benchmarks, 2021b. +Zhu, Z., Zhang, Z., Xhonneux, L.-P., and Tang, J. Neural bellman-ford networks: A general graph neural network framework for link prediction. Advances in Neural Information Processing Systems, 34:29476-29490, 2021c. + +# A. Pseudocode of STAGE Algorithm + +In this section, we present the detailed pseudocode for STAGE's two main components: (1) the STAGE-edge-graphs construction algorithm (Algorithm 1) that captures statistical dependencies between attributes, and (2) the forward pass (Algorithm 2) that uses these STAGE-edge-graphs to generate the final graph representation. 
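As a reading aid, the following is a minimal NumPy sketch of these same two components, corresponding to Algorithms 1 and 2 below. It assumes all attributes are totally ordered (the unordered branches of Algorithm 1 would use equality tests instead of $\leq$), and the two GNN encoders $M_1$ and $M_2$ are replaced by toy stand-in callables, so the snippet illustrates the data flow rather than the actual model.

```python
import numpy as np

def stage_edge_graph(X, E, u, v):
    """Weighted adjacency matrix S^{uv} (2d x 2d) of one STAGE-edge-graph.

    Sketch of Algorithm 1 assuming every attribute is totally ordered
    (unordered attributes would use == instead of <=).
    X: (n, d) node-attribute matrix; E: list of (source, target) node indices.
    """
    n, d = X.shape
    S = np.zeros((2 * d, 2 * d))
    le_u = X <= X[u]                                  # (n, d): x_i^A <= x_i^u
    le_v = X <= X[v]                                  # (n, d): x_i^B <= x_i^v
    src = np.array([e[0] for e in E])
    tgt = np.array([e[1] for e in E])
    # Diagonal: marginal empirical c.d.f. values (lines 5-16 of Algorithm 1).
    S[np.arange(d), np.arange(d)] = le_u.mean(axis=0)
    S[np.arange(d) + d, np.arange(d) + d] = le_v.mean(axis=0)
    # Off-diagonal: empirical conditional probabilities (lines 17-30).
    for i in range(d):
        for j in range(d):
            if i != j:
                # intra-node entries, conditioning over uniformly sampled nodes
                if le_u[:, j].any():
                    S[i, j] = le_u[le_u[:, j], i].mean()
                if le_v[:, j].any():
                    S[i + d, j + d] = le_v[le_v[:, j], i].mean()
            # inter-node entries, conditioning over uniformly sampled edges
            cond_src = le_u[src, j]                   # x_j^A <= x_j^u per edge
            cond_tgt = le_v[tgt, j]                   # x_j^B <= x_j^v per edge
            if cond_tgt.any():
                S[i, j + d] = le_u[src, i][cond_tgt].mean()
            if cond_src.any():
                S[i + d, j] = le_v[tgt, i][cond_src].mean()
    return S

def stage_forward(X, E, intra_edge_gnn, inter_edge_gnn):
    """Algorithm 2: encode every STAGE-edge-graph, then the re-attributed graph."""
    edge_embeddings = {(a, b): intra_edge_gnn(stage_edge_graph(X, E, a, b))
                       for a, b in E}
    return inter_edge_gnn(E, edge_embeddings)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(6, 3))                       # 6 nodes, 3 attributes
    E = [(0, 1), (1, 2), (3, 4), (4, 5)]
    m1 = lambda S: S.mean(axis=1)                     # toy stand-in for M1
    m2 = lambda E, r: np.mean([r[e] for e in E], axis=0)  # toy stand-in for M2
    print(stage_forward(X, E, m1, m2))                # toy graph representation
```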
The STAGE-edge-graphs construction creates a complete graph for each edge in the input graph, where nodes represent attributes and edge weights capture conditional probabilities between attribute pairs. The algorithm handles both totally ordered and unordered attributes. The forward pass then processes these STAGE-edge-graphs using two GNNs - one to generate edge embeddings from the STAGE-edge-graphs, and another to produce the final graph representation using these embeddings. + +Algorithm 1 STAGE-edge-graphs Construction +1: Input: Graph $G = (V,E,X)$ with node attributes $\mathbf{X} = \{\pmb{x}^v\}_{v\in V}$ +2: Output: STAGE-edge-graphs $\{G(S^{uv})\}_{(u,v)\in E}$ +3: for each $(u,v)\in E$ do +4: Initialize $S^{uv}\in \mathbb{R}^{2d\times 2d}$ with zeros +5: for $i = 1$ to $d$ do +6: if $x_{i}^{u}$ is totally ordered then +7: $S_{ii}^{uv}\gets \mathbb{P}(\mathrm{x}_i^A\leq \mathrm{x}_i^u)$ +8: else +9: $S_{ii}^{uv}\gets \mathbb{P}(\mathrm{x}_i^A = \mathrm{x}_i^u)$ +10: end if +11: if $x_{i}^{v}$ is totally ordered then +12: $S_{(i + d)(i + d)}^{uv}\gets \mathbb{P}(\mathrm{x}_i^B\leq \mathrm{x}_i^v)$ +13: else +14: $S_{(i + d)(i + d)}^{uv}\gets \mathbb{P}(\mathrm{x}_i^B = \mathrm{x}_i^v)$ +15: end if +16: end for +17: for $i = 1$ to $2d$ do +18: for $j = 1$ to $2d, j\neq i$ do +19: $(x_i,x_j)\gets \operatorname {GetAttributePair}(\pmb{x}^u,\pmb{x}^v,i,j,d)$ +20: if $x_{i},x_{j}$ are totally ordered then +21: $S_{ij}^{uv}\gets \mathbb{P}(\mathrm{x}_i\leq x_i|\mathrm{x}_j\leq x_j)$ +22: else if $x_{i}$ unordered, $x_{j}$ totally ordered then +23: $S_{ij}^{uv}\gets \mathbb{P}(\mathrm{x}_i = x_i|\mathrm{x}_j\leq x_j)$ +24: else if $x_{i}$ totally ordered, $x_{j}$ unordered then +25: $S_{ij}^{uv}\gets \mathbb{P}(\mathrm{x}_i\leq x_i|\mathrm{x}_j = x_j)$ +26: else +27: $S_{ij}^{uv}\gets \mathbb{P}(\mathrm{x}_i = x_i|\mathrm{x}_j = x_j)$ +28: end if +29: end for +30: end for +31: $G(S^{uv})\gets \operatorname{CreateCompleteGraph}(2d)$ +32: for $i = 1$ to $2d$ do +33: SetNodeAttribute $(G(S^{uv}),i,S_{ii}^{uv})$ +34: end for +35: for $i = 1$ to $2d$ do +36: for $j = 1$ to $2d, j\neq i$ do +37: SetEdgeAttribute $(G(S^{uv}),(i,j),S_{ij}^{uv})$ +38: end for +39: end for +40: end for + +Algorithm 2 STAGE Forward Pass +1: Input: Graph $G = (V,E,X)$ , GNNs $M_{1}$ and $M_{2}$ +2: Output: Graph representation $M(G)$ +3: $\{G(S^{uv})\}_{(u,v)\in E} \leftarrow$ ConstructEdgeGraphs $(G)$ +4: for each $(u,v) \in E$ do +5: $r^{uv} \gets M_1(G(S^{uv}))$ +6: end for +7: $G' \gets (V,E, \{r^{uv}\}_{(u,v)\in E})$ +8: $M(G) \gets M_2(G')$ +9: return $M(G)$ + +# B. Proofs and Additional Theoretical Results + +# B.1. Groupoids + +Definition B.1 (Groupoids). A groupoid $\mathcal{G}$ consists of the following elements: + +1. A collection of distinct spaces, denoted as Spaces $(\mathcal{G})$ +2. A set of transformations (also called morphisms) between these spaces, denoted as $\mathrm{Trans}(\mathcal{G})$ . +3. Each transformation $f \in \operatorname{Trans}(\mathcal{G})$ maps one space in $\operatorname{Spaces}(\mathcal{G})$ to another space (or potentially to itself), denoted as $f: X \to Y$ , where $X, Y \in \operatorname{Spaces}(\mathcal{G})$ . +4. There is a rule for combining transformations: for any two transformations $f: X \to Y$ and $g: Y \to Z$ , their composition results in a transformation $g \circ f: X \to Z$ . +5. 
Each space $S \in \operatorname{Spaces}(\mathcal{G})$ has an identity transformation $\mathrm{id}_S : S \to S$ that maps $S$ to itself, such that for any space $X \in \operatorname{Spaces}(\mathcal{G})$ and any transformation $f_1 : S \to X$ and $f_2 : X \to S$ , it guarantees $f_1 \circ \mathrm{id}_S = f_1$ and $\mathrm{id}_S \circ f_2 = f_2$ . +6. Every transformation $f: X \to Y$ has a unique inverse transformation $f^{-1}: Y \to X$ such that $f^{-1} \circ f = \mathrm{id}_X$ and $f \circ f^{-1} = \mathrm{id}_Y$ . + +# B.2. Statistical tests as graph regression on feature hypergraphs + +To prove the result of Theorem 3.2, we will first show an intermediate result using the notion of maximal invariants. Let $\mathcal{G}$ be a transformation group acting on a space $\mathbb{X}$ . A function $M:\mathbb{X}\to \mathbb{Z}$ is said to be maximal invariant if it is invariant to transformations of $\mathcal{G}$ and if $\forall x_{1},x_{2}\in \mathbb{X}$ , $M(x_{1}) = M(x_{2})$ implies $x_{2} = g\circ x_{1}$ for some group action $g\in \mathcal{G}$ , that is, if $M$ is constant on the orbits but for each orbit, it takes on a different value (Lehmann et al., 2005, pp. 214). A maximal invariant is a representation theory counterpart of sufficient statistics. + +Our intermediate result will show that the feature hypergraph admits a graph representation that is a maximal invariant. But first, we need to formally define the class of invariances, which we show later is essential for STAGE's domain transferability. Since we are interested in attribute spaces of distinct domains, rather than using groups (which involve automorphisms mapping a space onto itself), we will use groupoids (Definition B.1). Groupoids generalize the concept of groups by allowing transformations between multiple spaces. In a group, all transformations map a space onto itself, while in a groupoid, transformations can map between different spaces, but must still be invertible. + +Definition B.2 (Component-wise order-preserving groupoids for attributes (COGF)). Let $\mathbb{X}_1, \mathbb{X}_2$ be two attribute spaces, both with $d$ attribute dimensions. An attribute transformation $f: \mathbb{X}_1 \to \mathbb{X}_2$ is said to be component-wise order-preserving if it can be decomposed into a set of maps $f_1, \ldots, f_d$ , where each $f_i$ maps the $i$ -th dimension of $\mathbb{X}_1$ to the $i$ -th dimension of $\mathbb{X}_2$ and is a homomorphism that preserves the total order in $\mathbb{X}_1$ , and all dimensions of both $\mathbb{X}_1$ and $\mathbb{X}_2$ have a mapping. + +Given an attribute of the endpoint nodes $\mathcal{E} = \{\{(x^u, x^v) \mid (u, v) \in E\}\}$ and a groupoid action $f$ from the COGF (Definition B.2), we define how $f$ acts on $\mathcal{E}$ as follows: + +$$ +f (\mathcal {E}) = \left\{\left\{\left(f \left(\boldsymbol {x} ^ {u}\right), f \left(\boldsymbol {x} ^ {v}\right)\right) \mid (u, v) \in E \right\} \right\}. +$$ + +Now, we are ready to establish the intermediate result as follows. + +Lemma B.3. Given a multiset of attributes of the endpoint nodes $\mathcal{E}$ and the feature hypergraph $\mathcal{F}_{\mathcal{E}}$ (Definition 3.1). There exists a parameterization $\theta^{*}$ for a maximally expressive hypergraph GNN encoder $M$ such that $M_{\theta^{*}}(\mathcal{F}_{\mathcal{E}})$ is a maximal invariant under COGFs (Definition B.2). + +Proof. 
Let $\mathcal{V}(\mathcal{F}_{\mathcal{E}})$ be the set of labeled nodes (labeled with the feature id and the order-statistic position) of $\mathcal{F}_{\mathcal{E}}$, let $\mathcal{H}(\mathcal{F}_{\mathcal{E}})$ be the set of hyperedges of $\mathcal{F}_{\mathcal{E}}$, and let $m(\mathcal{F}_{\mathcal{E}})$ be the number of entities from $\mathcal{E}$, which is attached as a label to the entire graph during creation. Given two hypergraphs $\mathcal{F}_{\mathcal{E}_1}, \mathcal{F}_{\mathcal{E}_2}$, we define $\mathcal{F}_{\mathcal{E}_1} = \mathcal{F}_{\mathcal{E}_2}$ if and only if $\mathcal{V}(\mathcal{F}_{\mathcal{E}_1}) = \mathcal{V}(\mathcal{F}_{\mathcal{E}_2})$, $\mathcal{H}(\mathcal{F}_{\mathcal{E}_1}) = \mathcal{H}(\mathcal{F}_{\mathcal{E}_2})$, and $m(\mathcal{F}_{\mathcal{E}_1}) = m(\mathcal{F}_{\mathcal{E}_2})$. Note that since the nodes in the feature hypergraph are always labeled, a most expressive hypergraph GNN $M_{\theta^*}$ will ensure that $M_{\theta^*}(\mathcal{F}_{\mathcal{E}_1}) = M_{\theta^*}(\mathcal{F}_{\mathcal{E}_2})$ if and only if $\mathcal{F}_{\mathcal{E}_1} = \mathcal{F}_{\mathcal{E}_2}$.

Let $\mathcal{G}$ be the COGF (Definition B.2) and let $f\in \mathcal{G}$ be an arbitrary groupoid action of the COGF. To show invariance, the goal is to show that $M_{\theta^{*}}(\mathcal{F}_{\mathcal{E}}) = M_{\theta^{*}}(\mathcal{F}_{f(\mathcal{E})})$ for any $\mathcal{E}$. Because $M_{\theta^{*}}$ is most expressive, this is equivalent to showing $\mathcal{F}_{\mathcal{E}} = \mathcal{F}_{f(\mathcal{E})}$.

Let $\mathcal{V}(\mathcal{F}_{\mathcal{E}}) = \{(i,k,l)\}_{i\in [d],k\in [m],l\in \{1,2\}}$. We first observe that since $f$ acts on individual feature values, it does not change the total number of entities $m$. Hence, the set of hypergraph nodes remains unchanged, $\mathcal{V}(\mathcal{F}_{f(\mathcal{E})}) = \mathcal{V}(\mathcal{F}_{\mathcal{E}})$.

For the edges, consider an arbitrary hyperedge $H_{uv}$ in $\mathcal{F}_{\mathcal{E}}$. Then, because $f$ is a COGF, it preserves the order statistics of all feature values. Thus, the order of the feature value $o_i(u)$ from $\mathcal{E}$ remains the same as $o_i'(u)$ from $f(\mathcal{E})$, for all $i$ and $u$. Hence, $H_{uv}$ is also a hyperedge in $\mathcal{F}_{f(\mathcal{E})}$. Similarly, because $f$ has an inverse $f^{-1}$, we can show that every edge $H_{uv}'$ in $\mathcal{F}_{f(\mathcal{E})}$ is also in $\mathcal{F}_{\mathcal{E}}$ under the transformation $f^{-1}$. Thus, $\mathcal{H}(\mathcal{F}_{\mathcal{E}}) = \mathcal{H}(\mathcal{F}_{f(\mathcal{E})})$ and so $\mathcal{F}_{\mathcal{E}} = \mathcal{F}_{f(\mathcal{E})}$, and therefore $M_{\theta^*}(\mathcal{F}_{\mathcal{E}}) = M_{\theta^*}(\mathcal{F}_{f(\mathcal{E})})$.

To show maximality, let $M_{\theta^{*}}(\mathcal{F}_{\mathcal{E}}) = M_{\theta^{*}}(\mathcal{F}_{\mathcal{E}^{\prime}})$ for some $\mathcal{E}$ and $\mathcal{E}^{\prime}$. Our goal is to show that $\mathcal{E}$ and $\mathcal{E}^{\prime}$ are on the same orbit, i.e., there exists an $f \in \mathcal{G}$ such that $f(\mathcal{E}) = \mathcal{E}^{\prime}$.

Because $M_{\theta^{*}}$ is most expressive, we know $\mathcal{F}_{\mathcal{E}} = \mathcal{F}_{\mathcal{E}'}$. This implies that $\mathcal{V}(\mathcal{F}_{\mathcal{E}}) = \mathcal{V}(\mathcal{F}_{\mathcal{E}'})$ and $|\mathcal{H}(\mathcal{F}_{\mathcal{E}})| = |\mathcal{H}(\mathcal{F}_{\mathcal{E}'})|$. First, let $m = |\mathcal{V}(\mathcal{F}_{\mathcal{E}})| = |\mathcal{V}(\mathcal{F}_{\mathcal{E}'})|$.
And since $\mathcal{V}(\mathcal{F}_{\mathcal{E}}) = \mathcal{V}(\mathcal{F}_{\mathcal{E}'})$, we also know that $\mathcal{E}$ and $\mathcal{E}'$ must have the same number of features; denote it $d$. In addition, because $\mathcal{H}(\mathcal{F}_{\mathcal{E}}) = \mathcal{H}(\mathcal{F}_{\mathcal{E}'})$, we have $|E| = |E'|$. Second, pick any endpoint features $(\pmb{x}^u,\pmb{x}^v)\in \mathcal{E}$, and let $H_{uv}\in \mathcal{H}(\mathcal{F}_{\mathcal{E}})$ be the corresponding hyperedge. We know that $H_{uv}\in \mathcal{H}(\mathcal{F}_{\mathcal{E}'})$ as well. Hence, there exists an endpoint feature $(\pmb{x}^{u'},\pmb{x}^{v'})\in \mathcal{E}'$ such that

$$
\forall 1 \leq i \leq d, \quad o_i(u) = o_i'(u') \ \text{and} \ o_i(v) = o_i'(v'),
$$

where $o_i(\cdot)$ is the order of the values of the $i$-th feature in $\mathcal{F}_{\mathcal{E}}$ and $o_i'(\cdot)$ the order of the values of the $i$-th feature in $\mathcal{F}_{\mathcal{E}'}$. Thus, we can construct a COGF groupoid action $f$ as follows.

Let $f$ be decomposed into a set of maps $f_{1},\ldots ,f_{d}$, one for every feature dimension $i$. Each $f_{i}$ is a piecewise linear function defined as follows:

$$
f_i(a) = \begin{cases}
a - (\boldsymbol{x})_{i(0)} + (\boldsymbol{x}')_{i(0)} & \text{if } a < (\boldsymbol{x})_{i(0)} \\
(\boldsymbol{x}')_{i(k)} & \text{if } a = (\boldsymbol{x})_{i(k)} \text{ for some } k: 1 \leq k \leq m_i' \\
(\boldsymbol{x}')_{i(k_0)} + \dfrac{(\boldsymbol{x}')_{i(k_1)} - (\boldsymbol{x}')_{i(k_0)}}{(\boldsymbol{x})_{i(k_1)} - (\boldsymbol{x})_{i(k_0)}}\,\big(a - (\boldsymbol{x})_{i(k_0)}\big) & \text{if } (\boldsymbol{x})_{i(k_0)} < a < (\boldsymbol{x})_{i(k_1)} \text{ for some } k_0, k_1: 1 \leq k_0 < k_1 \leq m_i' \\
a - (\boldsymbol{x})_{i(m_i')} + (\boldsymbol{x}')_{i(m_i')} & \text{if } a > (\boldsymbol{x})_{i(m_i')}
\end{cases}
$$

Since each $f_{i}$ is a strictly increasing piecewise linear function, each of them preserves the order of feature values. And since $f$ can be decomposed into the $f_{i}$'s, $f$ is a COGF groupoid action.

Hence, we have shown that there exists an $f$ such that $f(\mathcal{E}) = \mathcal{E}'$, which establishes maximality and completes the proof.

![](images/06f55a03328411da01e3e34106ff714f33371dcc62351e64a43e67347ccad8d1.jpg)

Based on Lemma B.3, we are ready to prove that measuring dependencies of the features $(\pmb{x}^u,\pmb{x}^v)\in \mathcal{E}$ under COGF invariances can be defined as depending only on a most-expressive GNN encoding of the feature hypergraph $\mathcal{F}_{\mathcal{E}}$. In short, this is because any hypothesis test $T(\mathcal{E})$ that can be expressed as a rank test is invariant to COGFs, and any invariant function can necessarily be expressed as depending only on a maximal invariant.

Theorem 3.2.
Given a multiset of attributes of the endpoint nodes $\mathcal{E}$ , the corresponding feature hypergraph $\mathcal{F}_{\mathcal{E}}$ (Definition 3.1) and a most-expressive hypergraph GNN encoder $M_{\theta^{*}}(\mathcal{F}_{\mathcal{E}})$ , then any test $T(\mathcal{E})$ that focuses on measuring the dependence of the attributes of the endpoint nodes of $\mathcal{E}$ has an equivalent function $h$ within the space of Multilayer Perceptrons (MLPs) that depends solely on the graph representation $M_{\theta^{*}}(\mathcal{F}_{\mathcal{E}})$ , i.e., $\exists h \in MLPs$ s.t. $T(\mathcal{E}) = h(M_{\theta^{*}}(\mathcal{F}_{\mathcal{E}}))$ . + +Proof. We first note that any test $T(\mathcal{E})$ that focuses on measuring the dependence or independence of endpoint features of $\mathcal{E}$ is necessarily a rank test that relies solely on the indices of the order statistics rather than the numerical values of the features (Bell, 1964; Berk & Bickel, 1968). As such, $T(\mathcal{E})$ is invariant to COGFs (Definition B.2). Now, we show that given $\mathcal{E}, \mathcal{F}_{\mathcal{E}}$ , and a most expressive hypergraph GNN encoder $M_{\theta^*}$ , there exists an $h$ such that $T(\mathcal{E}) = h(M_{\theta^*}(\mathcal{F}_{\mathcal{E}}))$ . + +For any $\mathcal{E}_1, \mathcal{E}_2$ , we know that if $M_{\theta^*}(\mathcal{F}_{\mathcal{E}_1}) = M_{\theta^*}(\mathcal{F}_{\mathcal{E}_2})$ , then $f(\mathcal{E}_1) = \mathcal{E}_2$ for some groupoid action $f$ in COGF (Lemma B.3). Then, because $T$ is invariant to $f$ , we have that $T(\mathcal{E}_1) = T(f(\mathcal{E}_1)) = T(\mathcal{E}_2)$ . Hence, each value of $M_{\theta^*}(\mathcal{F}_{\mathcal{E}})$ is associated with no more than one value of $T(\mathcal{E})$ . In other words, there exists a mapping $h^*$ such that $h^*(M_{\theta^*}(\mathcal{F}_{\mathcal{E}})) = T(\mathcal{E})$ . + +Since MLPs are universal function approximators (Leshno et al., 1993), there exists a MLP $h$ that approximates $h^*$ , i.e., $h(M_{\theta^*}(\mathcal{F}_{\mathcal{E}})) = T(\mathcal{E})$ . + +# B.3. Correspondence between STAGE-edge-graphs and feature hypergraphs + +For the proof of Theorem 3.3, we first prove an intermediate result, which establishes that there exists a bijective mapping between the feature hypergraph $\mathcal{F}_{\mathcal{E}}$ and the multiset of stage graphs, $\mathbb{S}_E \coloneqq \{\{G(S^{uv}) \mid (u,v) \in E\}\}$ , where each STAGE-edge-graph is equipped with unique feature ids. In the case of repeated feature values, we will show a bijective mapping to a collapsed feature hypergraph, where the nodes corresponding to the repeated feature values are collapsed into one single node, with its order $k$ being the smallest order of these repeated values. We denote by $n_i'$ the number of unique feature values of feature $i$ . We note that such a collapsed feature hypergraph in the case of repeated feature values will provide a representation that is stabiler than the traditional rank tests, as repeated values will translate into uncertainty or noise in the rank test results, whereas our feature hypergraph representation will remain stable. + +Lemma B.4. There exists a bijective mapping $\mathcal{I}$ between the multiset of STAGE-edge-graphs $\mathbb{S}_E\coloneqq \{\{\pmb {G}(\pmb {S}^{uv})\mid (u,v)\in E\} \}$ with unique feature ids and the feature hypergraph $\mathcal{F}_{\mathcal{E}}$ (Definition 3.1). + +Proof. 
Let $G = (V, E, \mathbf{X})$ be an input graph and let $\mathcal{E} = \{\{(x^u, x^v) \mid (u, v) \in E\}\}$ be the corresponding multiset of attributes of the endpoint nodes. We assume that each stage graph $G(S^{uv}) \in \mathbb{S}_E$ has nodes labeled as follows: the node associated with $i$ -th feature for the source node $u$ is labeled with $(i, 1)$ , and the node associated with $i$ -th feature for the target node $v$ is labeled with $(i, 2)$ , for every feature $i \in [d]$ . Thus, given the graph $G(S^{uv})$ , we can recover weighted adjacency matrix $S^{uv}$ , and so there is a one-to-one mapping between them. Hence, for the following discussion, we refer to $G(S^{uv})$ and $S^{uv}$ interchangeably. + +We first show that, given $\mathbb{S}_E$ , we can construct $\mathcal{F}_{\mathcal{E}}$ . + +Construct $\mathcal{I}:\mathcal{I}(\mathbb{S}_E) = \mathcal{F}_{\varepsilon}$ + +We first construct the set of feature hypergraph nodes. For every feature $i$ , collect the multiset $Q_{i1} = \{\{S_{ii}^{uv}\}_{(u,v) \in E}$ and $Q_{i2} = \left\{\left\{S_{(i + d)(i + d)}^{uv}\right\} \right\}_{(u,v) \in E}$ and let $Q_{i} = Q_{i1} \cup Q_{i2}$ . In words, $Q_{i1}$ collects the i-th feature's empirical c.d.f., $S_{ii}^{uv} = p(x_i^u) = \mathbb{P}(\mathrm{x}_i \leq x_i^u)$ , of the source node $u$ of all edges. Similarly, $Q_{i2}$ collects the i-th feature's empirical c.d.f., $S_{(i + d)(i + d)}^{uv} = p(x_i^v) = \mathbb{P}(\mathrm{x}_i \leq x_i^v)$ , of the target node $v$ of all edges. Note that $Q_{i}$ is a multiset, so if there are multiple nodes $u$ (or $v$ ) with the same i-th feature value $x_i^u$ (or $x_i^v$ ), they will have the same empirical c.d.f. $p(x_i^u)$ (or $p(x_i^v)$ ), and thus $Q_{i}$ will record the multiplicity (number of occurrence) of such repeated c.d.f. values. + +Sort the unique values in the multiset $Q_{i}$ in ascending order and denote the sorted sequence of unique values as $S_{i} = (s_{1}, s_{2}, \ldots, s_{m_{i}^{\prime}})$ where $s_{l} \in Q_{i}$ for each $l \in [m_{i}^{\prime}]$ where $m_{i}^{\prime} \leq m$ is the total number of unique values for feature $i$ (if all values have multiplicity of 1, then $m_{i}^{\prime} = m$ ). Denote $n_{i}(s_{l})$ the multiplicity of the value $s_{l}$ in the multiset $Q_{i}$ . Then, we can recover the feature hypergraph's nodes corresponding to the $i$ -th feature as follows: + +- For the smallest feature value, Construct the two nodes labeled $(i,1,1)$ and $(i,1,2)$ . +- For $l \in \{2, 3, \dots, m_i'\}$ and $s_l \in S_i$ , construct the two nodes labeled as $(i, l - 1 + n_i(s_{l-1}), 1)$ and $(i, l - 1 + n_i(s_{l-1}), 2)$ . $l - 1 + n_i(s_{l-1})$ is the order of the feature value $s_l$ , accounting for multiplicity. + +Repeating the above process for all features $i$ will recover the node set of the feature hypergraph. + +Then, we reconstruct the multiset of hyperedges. Take any $S^{uv} \in \mathbb{S}_E$ . Again, for every feature $i$ , we have $S_{ii}^{uv} = p(x_i^u)$ denoting the empirical c.d.f. of the i-th feature of the source node $u$ . Let $N_{iu} = \{(i,k_{i,1},1),\ldots ,(i,k_{i,m_i'},1)\}$ , where the $k_{i,l}$ 's are the orders ( $l \in [m_i']$ ). $N_{iu}$ then is the subset of hypernodes for feature $i$ associated with the source node $u$ in the original edge in the input graph. Now, let $l'$ be the smallest integer in $[m_i']$ such that $k_{i,l'} > S_{ii}^{uv} = p(x_i^u)$ , and let $k_i^u = k_{i,l'} - 1$ . Then, $k_i^u$ is the order of the i-th feature of node $u$ , i.e., $k_i^u = o_i(u)$ . 
+ +Similarly, for every feature $i$ , we have $\pmb{S}_{(i + d)(i + d)}^{uv} = p(x_i^v)$ , the empirical marginal c.d.f. when node $v$ is the target node of an edge. Let $N_{iv} = \{(i,k_{i,1},2),\ldots ,(i,k_{i,m_i'},2)\}$ . Let $l''$ be the smallest integer in $[m_i']$ such that $k_{l''} > S_{(i + d)(i + d)}^{uv} = p(x_i^v)$ . Then, let $k_{i}^{v} = k_{l''} - 1$ , and this is the order of the i-th feature of node $v$ , i.e., $k_{i}^{v} = o_{i}(v)$ . + +Hence, we have recovered the hyperedge: + +$$ +H _ {u v} = \{(i, k _ {i} ^ {u}, 1) \} _ {i \in [ d ]} \cup \{(i, k _ {i} ^ {v}, 2) \} _ {i \in [ d ]}. +$$ + +where $k_{i}^{u}$ and $k_{i}^{v}$ are defined as above. + +Repeat the above process for every $S^{uv} \in \mathbb{S}_E$ , then we recover the entire multiset of hyperedges for the feature hypergraph. + +Construct $\mathcal{I}^{-1}:\mathcal{I}^{-1}(\mathcal{F}_{\mathcal{E}}) = \mathbb{S}_{E}$ + +Given a feature hypergraph $\mathcal{F}_{\mathcal{E}}$ with $\nu(\mathcal{F}_{\mathcal{E}})$ the set of nodes and $\mathcal{H}(\mathcal{F}_{\mathcal{E}})$ the multiset of hyperedges. Our goal is to reconstruct the multiset of STAGE-edge-graphs $\mathbb{S}_E = \{\{G(S^{uv}) \mid (u, v) \in E\}\}$ for some underlying edge set $E$ . + +Pick any hyperedge $H = \{(i, k_i^1, 1)\}_{i \in [d]} \cup \{(i, k_i^2, 2)\}_{i \in [d]} \in \mathcal{H}(\mathcal{F}_{\mathcal{E}})$ , where $k_i^1 = o_i(u)$ is the order of i-th feature value for some unknown node $u$ and $k_i^2 = o_i(v)$ the order of i-th feature value for some unknown node $v$ . We first construct the corresponding STAGE-edge-graph adjacency matrix, which we denote $S^H$ . Once $S^H$ is obtained, then we have the STAGE-edge-graph $G(S^H)$ . + +First, we construct the diagonal entries of $S^H$ as follows. Note that the entire hypergraph is labeled with an integer $m$ , which indicate the total number of entities (nodes) in the original input graph. Hence, we can recover the marginal empirical c.d.f. of the $i$ -th feature value of each entity. Specifically, for every feature $i$ , we have $k_i^1$ from the hyperedge $H$ , denoting the order of $i$ -th feature value of the underlying source node $u$ of an edge in the original input graph. If there is another hypergraph node $(i, k', 1) \in \mathcal{V}(\mathcal{F}_{\mathcal{E}})$ such that $k' > k_i^1$ , then let $n_i^1 = k' - 1$ . Otherwise, let $n_i^1 = m$ . Thus, $n_i^1$ indicates the total number of nodes in the original input graph that have the $i$ -th feature values smaller than or equal to the $i$ -th feature value of the current node $u$ . Note that $n_i^1$ accounts for multiplicity, if there were multiple nodes having the same $i$ -th feature value as this node. Hence, let $S_{ii}^H = n_i^1 / m$ , which is equal to the marginal empirical c.d.f. of the $i$ -th feature value of node $u$ . + +Similarly, for every feature $i$ we have $k_i^2$ . If there is another hypergraph node $(i,k',2) \in \mathcal{V}(\mathcal{F}_{\mathcal{E}})$ such that $k' > k_i^2$ , then let $n_i^2 = k' - 1$ . Otherwise, let $n_i^2 = m$ . Let $S_{(i+d)(i+d)}^H = n_i^2 / m$ . Hence, we have filled in the diagonal entries of $S^H$ . + +Second, we construct the off-diagonal entries of $S^H$ . Recall that the off-diagonal entries of STAGE-edge-graph weighted adjacency matrices denote the empirical conditional probabilities between two different features (Equation (2)), either within the same source node, the same target node, or between the source and target node. 
Specifically, for any two features $i,j\in [d]$, $i\neq j$, the entries are

$$
\boldsymbol{S}_{ij}^{H} = \mathbb{P}_{\mathrm{A} \sim \mathrm{Unif}(V)}\left(\mathrm{x}_i^{\mathrm{A}} \leq x_i^u \mid \mathrm{x}_j^{\mathrm{A}} \leq x_j^u\right)
$$

$$
\boldsymbol{S}_{i(j+d)}^{H} = \mathbb{P}_{(\mathrm{A},\mathrm{B}) \sim \mathrm{Unif}(E)}\left(\mathrm{x}_i^{\mathrm{A}} \leq x_i^u \mid \mathrm{x}_j^{\mathrm{B}} \leq x_j^v\right)
$$

$$
\boldsymbol{S}_{(i+d)j}^{H} = \mathbb{P}_{(\mathrm{A},\mathrm{B}) \sim \mathrm{Unif}(E)}\left(\mathrm{x}_i^{\mathrm{B}} \leq x_i^v \mid \mathrm{x}_j^{\mathrm{A}} \leq x_j^u\right)
$$

$$
\boldsymbol{S}_{(i+d)(j+d)}^{H} = \mathbb{P}_{\mathrm{B} \sim \mathrm{Unif}(V)}\left(\mathrm{x}_i^{\mathrm{B}} \leq x_i^v \mid \mathrm{x}_j^{\mathrm{B}} \leq x_j^v\right)
$$

where $(u,v)$ is the edge in the input graph corresponding to the hyperedge $H$.

We can compute these entries of $S^H$ as follows. First, given any hyperedge $H' \in \mathcal{H}(\mathcal{F}_{\mathcal{E}})$, denote by $K_{H'}^r(i)$, for any $i \in [d]$ and $r \in \{1,2\}$, the order such that $(i, K_{H'}^r(i), r) \in H'$. Then, regarding our particular hyperedge $H$ of interest, for every pair of features $i, j \in [d]$ with $i \neq j$, we can obtain $n_i^1, n_j^1, n_i^2$, and $n_j^2$ as defined previously. Recall that $n_i^r$ is the number of values of the $i$-th feature that are smaller than or equal to the current $i$-th feature value captured by $H$, for either the source node $(r = 1)$ or the target node $(r = 2)$.

The entries $S_{ij}^{H}$ and $S_{(i+d)(j+d)}^{H}$ capture inner-node feature dependencies, and we notice that the empirical conditional probabilities are defined w.r.t. random nodes sampled uniformly from the set of all nodes. Hence, we can compute these two entries as follows:

$$
\boldsymbol{S}_{ij}^{H} = \min\left\{1, n_i^1 / n_j^1\right\}
$$

$$
\boldsymbol{S}_{(i+d)(j+d)}^{H} = \min\left\{1, n_i^2 / n_j^2\right\}.
$$

To compute the entries $S_{i(j+d)}^{H}$ and $S_{(i+d)j}^{H}$, we note that the random nodes A, B are uniformly sampled from the set of edges $E$. To do so, we first define the two subsets of hyperedges $\mathcal{H}_j^1$ and $\mathcal{H}_j^2$ as follows:

$$
\mathcal{H}_j^1 := \left\{H' \in \mathcal{H}(\mathcal{F}_{\mathcal{E}}) \mid K_{H'}^1(j) \leq n_j^1\right\}
$$

$$
\mathcal{H}_j^2 := \left\{H' \in \mathcal{H}(\mathcal{F}_{\mathcal{E}}) \mid K_{H'}^2(j) \leq n_j^2\right\}.
$$

In other words, $\mathcal{H}_j^1$ is the subset of hyperedges whose node $(j, K_{H'}^1(j), 1)$ has an order $K_{H'}^1(j)$ that is smaller than or equal to the order of the counterpart node of the current hyperedge $H$. Vice versa for $\mathcal{H}_j^2$.
Hence, we have

$$
|\mathcal{H}_j^1| / |\mathcal{H}(\mathcal{F}_{\mathcal{E}})| = \mathbb{P}_{(\mathrm{A},\mathrm{B}) \sim \mathrm{Unif}(E)}\left(\mathrm{x}_j^{\mathrm{A}} \leq x_j^u\right)
$$

$$
|\mathcal{H}_j^2| / |\mathcal{H}(\mathcal{F}_{\mathcal{E}})| = \mathbb{P}_{(\mathrm{A},\mathrm{B}) \sim \mathrm{Unif}(E)}\left(\mathrm{x}_j^{\mathrm{B}} \leq x_j^v\right).
$$

Then, we define the next two subsets $\mathcal{H}_{i|j}^{1|2}$ and $\mathcal{H}_{i|j}^{2|1}$ as follows:

$$
\mathcal{H}_{i|j}^{1|2} := \left\{H' \in \mathcal{H}_j^2 \mid K_{H'}^1(i) \leq n_i^1\right\}
$$

$$
\mathcal{H}_{i|j}^{2|1} := \left\{H' \in \mathcal{H}_j^1 \mid K_{H'}^2(i) \leq n_i^2\right\}.
$$

These two subsets let us compute the empirical conditional probabilities. Namely, we now have

$$
|\mathcal{H}_{i|j}^{1|2}| / |\mathcal{H}_j^2| = \mathbb{P}_{(\mathrm{A},\mathrm{B}) \sim \mathrm{Unif}(E)}\left(\mathrm{x}_i^{\mathrm{A}} \leq x_i^u \mid \mathrm{x}_j^{\mathrm{B}} \leq x_j^v\right)
$$

$$
|\mathcal{H}_{i|j}^{2|1}| / |\mathcal{H}_j^1| = \mathbb{P}_{(\mathrm{A},\mathrm{B}) \sim \mathrm{Unif}(E)}\left(\mathrm{x}_i^{\mathrm{B}} \leq x_i^v \mid \mathrm{x}_j^{\mathrm{A}} \leq x_j^u\right).
$$

Thus, we set the adjacency matrix entries for inter-node dependencies to

$$
\boldsymbol{S}_{i(j+d)}^{H} = |\mathcal{H}_{i|j}^{1|2}| / |\mathcal{H}_j^2|
$$

$$
\boldsymbol{S}_{(i+d)j}^{H} = |\mathcal{H}_{i|j}^{2|1}| / |\mathcal{H}_j^1|.
$$

Now that we have constructed a mapping $\mathcal{I}$ from $\mathbb{S}_E$ to $\mathcal{F}_{\mathcal{E}}$ and another mapping $\mathcal{I}^{-1}$ from $\mathcal{F}_{\mathcal{E}}$ to $\mathbb{S}_E$, we want to check that they are valid bijections. To show this, we show that $\mathcal{I}^{-1} \circ \mathcal{I} = \text{Identity}$ and $\mathcal{I} \circ \mathcal{I}^{-1} = \text{Identity}$.

# Show that $\mathcal{I}^{-1}\circ \mathcal{I} = \mathbf{Identity}$

Let $\mathbb{S}_E$ be an arbitrary multiset of STAGE-edge-graphs. Let $\mathcal{F}' = \mathcal{I}(\mathbb{S}_E)$ and $\mathbb{S}'' = \mathcal{I}^{-1}(\mathcal{F}') = \mathcal{I}^{-1}(\mathcal{I}(\mathbb{S}_E))$. First, we observe that the mapping $\mathcal{I}$ transforms each element $G(S^{uv}) \in \mathbb{S}_E$ into one hyperedge $H' \in \mathcal{F}'$. Similarly, the mapping $\mathcal{I}^{-1}$ transforms each hyperedge $H' \in \mathcal{F}'$ into one STAGE-edge-graph $G'' \in \mathbb{S}''$. Hence, as long as we show that, for any $G(S^{uv}) \in \mathbb{S}_E$, the composed transformation $\mathcal{I}^{-1} \circ \mathcal{I}$ produces a STAGE-edge-graph $G''$ such that $G(S^{uv}) = G''$, we can conclude $\mathcal{I}^{-1} \circ \mathcal{I} = \text{Identity}$.

To observe this, we first note that $\pmb{G}''$ has the same set of labeled nodes as $\pmb{G}$, and that each node $(i,r), i \in [d], r \in \{1,2\}$ has the same empirical marginal c.d.f. values.
Similarly, between any two nodes $(i_1,r_1)$ and $(i_2,r_2)$ , $\pmb{G}$ and $\pmb{G}''$ will have the same edge attribute for the edge $((i_1,r_1),(i_2,r_2))$ , which corresponds to the empirical conditional probabilities between features $i_1$ and $i_2$ and between node placement in the original edge (source or target) $r_1$ and $r_2$ . Thus, $\pmb{G} = \pmb{G}''$ . + +# Show that $\mathcal{I} \circ \mathcal{I}^{-1} = \mathbf{Identity}$ + +Let $\mathcal{F}_{\mathcal{E}}$ be an arbitrary feature hypergraph. Let $\mathbb{S}' = \mathcal{I}^{-1}(\mathcal{F}_{\mathcal{E}})$ and $\mathcal{F}'' = \mathcal{I}(\mathbb{S}')$ . Similarly, as long as we show that, for any hyperedge $H \in \mathcal{F}_{\mathcal{E}}$ , the composed transformation $\mathcal{I} \circ \mathcal{I}^{-1}$ produces a hypergraph $H''$ such that $H = H''$ , we can conclude that $\mathcal{I} \circ \mathcal{I}^{-1} = \text{Identity}$ . + +To observe this, we note that every hyperedge $(i,k,r)\in H$ , where $i\in [d],1\leq k\leq m_i',r\in \{1,2\}$ , will be recovered in $H''$ . This is because each $(i,k,r)\in H$ corresponds to a unique labeled node $(i,r)$ in the STAGE-edge-graph $\mathbb{G}'$ , which will be used to construct a node $(i,k'',r)$ in $H''$ under the mapping $\mathcal{I}$ . In terms of the order $k$ , the mapping $\mathcal{I}^{-1}$ will convert it into the marginal empirical c.d.f. value, which is treated as the attribute of node $(i,r)$ in the STAGE-edge-graph $\mathbb{G}'$ . The mapping $\mathcal{I}$ , on the other hand, will convert this marginal empirical c.d.f. value into the order $k''$ for the node $(i,k'',r)$ in $H''$ , guaranteeing $k'' = k$ . Thus, every node $(i,k,r)$ that is in $H$ is also in $H''$ , and there will be no additional nodes created for $H''$ . Hence, $H = H''$ for every hyperedge $H\in \mathcal{F}_{\mathcal{E}}$ , and thus $\mathcal{I}\circ \mathcal{I}^{-1} = \mathrm{Identity}$ . + +In conclusion, we have shown two mappings, $\mathcal{I}$ and $\mathcal{I}^{-1}$ , and have shown that they are the inverse transformation of each other. Hence, $\mathcal{I}$ is a bijective mapping between the multiset of STAGE-edge-graphs and feature hypergraph. + +Given the bijective mapping in Lemma B.4 between the multiset of STAGE-edge-graphs with unique feature identifiers and the feature hypergraph, and the fact that the feature hypergraph allows for a maximal invariant graph representation (Lemma B.3), it follows that the set of STAGE-edge-graphs can also yield a maximal invariant representation of the original input graph. This observation is formalized as below, which is our second theoretical contribution: + +Theorem 3.3. Given the attributes of the endpoint nodes $\mathcal{E}$ (Definition 3.1) of a graph $G = (V, E, X)$ , there exists an optimal parameterization $\theta_{g}^{*}, \theta_{s}^{*}$ for a most expressive GNN encoder $M^{g}$ and a most-expressive multiset encoder $M^{s}$ , respectively, such that $M_{\theta_{s}^{*}, \theta_{g}^{*}}(G) := M_{\theta_{s}^{*}}^{s}\left(\left\{\left\{M_{\theta_{g}^{*}}^{g}(\boldsymbol{G}(\boldsymbol{S}^{uv})): (u, v) \in E\right\}\right\}\right)$ such that any test $T(\mathcal{E})$ that measures the dependence of $\mathcal{E}$ 's attributes of the endpoint nodes has an equivalent function $h$ within the space of Multilayer Perceptrons (MLPs) that depends solely on the graph representation $M_{\theta_{s}^{*}, \theta_{g}^{*}}(G)$ , i.e., $\exists h \in MLPs$ s.t. $T(\mathcal{E}) = h(M_{\theta_{s}^{*}, \theta_{g}^{*}}(G))$ . + +Proof. 
To show invariance, let $G_{1} = (V,E,\mathbf{X}_{2})$ and $G_{2} = (V,E,\mathbf{X}_{2})$ be two graphs such that $f(\mathbf{X}_1) = \mathbf{X}_2$ for some groupoid action $f$ in the COGF. Let $\mathcal{E}_1$ and $\mathcal{E}_2$ be the corresponding attributes of the endpoint nodes respectively, from which we have $f(\mathcal{E}_1) = \mathcal{E}_2$ . Let $\mathbb{S}_{1E} = \{\{G(S_1^{uv})\mid (u,v)\in E\} \}$ and $\mathbb{S}_{2E} = \{\{G(S_2^{uv})\mid (u,v)\in E\} \}$ be the corresponding STAGE-edge-graphs respectively. + +Since $f(\mathcal{E}_1) = \mathcal{E}_2$ , and the attribute hypergraph is invariant to COGF (shown in the proof for Lemma B.3), we have $\mathcal{F}_{\mathcal{E}_1} = \mathcal{F}_{\mathcal{E}_2}$ . And since there is a one-to-one mapping between the multiset of STAGE-edge-graphs and the feature hypergraph, we have $\mathbb{S}_{1E} = \mathbb{S}_{2E}$ . Hence, + +$$ +\begin{array}{l} \left\{\left\{M _ {\theta_ {g} ^ {*}} \left(\boldsymbol {S} _ {1} ^ {u v}\right) \mid (u, v) \in E \right\} \right\} = \left\{\left\{M _ {\theta_ {g} ^ {*}} (\boldsymbol {S}) \right\} \right\} _ {\boldsymbol {S} \in \mathbb {S} _ {1 E}} \\ = \left\{\left\{M _ {\theta_ {g} ^ {*}} (\boldsymbol {S}) \right\} \right\} _ {\boldsymbol {S} \in \mathbb {S} _ {2 E}} = \left\{\left\{M _ {\theta_ {g} ^ {*}} (\boldsymbol {S} _ {2} ^ {u v}) \mid (u, v) \in E \right\} \right\}. \\ \end{array} +$$ + +As a result, + +$$ +\begin{array}{l} M _ {\theta_ {s} ^ {*}, \theta_ {g} ^ {*}} (G _ {1}) = M _ {\theta_ {s} ^ {*}} ^ {s} (\left\{\left\{M _ {\theta_ {g} ^ {*}} ^ {g} (\boldsymbol {G} (\boldsymbol {S} _ {1} ^ {u v})) \mid (u, v) \in E \right\} \right\}) \\ = M _ {\theta_ {s} ^ {*}} ^ {s} \left(\left\{\left\{M _ {\theta_ {g} ^ {*}} ^ {g} \left(\boldsymbol {G} \left(\boldsymbol {S} _ {2} ^ {u v}\right)\right) \mid (u, v) \in E \right\} \right\}\right) = M _ {\theta_ {s} ^ {*}, \theta_ {g} ^ {*}} \left(G _ {2}\right). \\ \end{array} +$$ + +To show maximality, Let $G_{1}$ and $G_{2}$ be two graphs such that $M_{\theta_{s}^{*},\theta_{g}^{*}}(G_{1}) = M_{\theta_{s}^{*},\theta_{g}^{*}}(G_{2})$ . Then, because $M_{\theta_{s}^{*}}^{s}$ is a most expressive multiset encoder, we have that + +$$ +M _ {\theta_ {s} ^ {*}} ^ {s} (\left\{\left\{M _ {\theta_ {g} ^ {*}} ^ {g} (\boldsymbol {G} (\boldsymbol {S} _ {1} ^ {u v})) \mid (u, v) \in E \right\} \right\}) = M _ {\theta_ {s} ^ {*}} ^ {s} (\left\{\left\{M _ {\theta_ {g} ^ {*}} ^ {g} (\boldsymbol {G} (\boldsymbol {S} _ {2} ^ {u v})) \mid (u, v) \in E \right\} \right\}). +$$ + +Again, since $M_{\theta_g^*}^g$ is a most expressive GNN, we have + +$$ +\mathcal {S} _ {1 E} = \{\{\boldsymbol {G} (\boldsymbol {S} _ {1} ^ {u v}) \mid (u, v) \in E \} \} = \{\{\boldsymbol {G} (\boldsymbol {S} _ {2} ^ {u v}) \mid (u, v) \in E \} \} = \mathcal {S} _ {2 E}. +$$ + +This implies that the feature hypergraphs $\mathcal{F}_{\mathcal{E}_1}$ and $\mathcal{F}_{\mathcal{E}_2}$ are the same, $\mathcal{F}_{\mathcal{E}_1} = \mathcal{F}_{\mathcal{E}_2}$ due to the bijective mapping between multisets of STAGE-edge-graphs and feature hypergraphs. And as has been shown in the proof of Lemma B.3, this implies there exists a groupoid action $f$ in COGF such that $f(\mathcal{E}_1) = \mathcal{E}_2$ . Hence, we have shown that $M_{\theta_s^*,\theta_g^*}(G)$ is a maximal invariant representation w.r.t. COGF. + +Thus, similar to the proof of Theorem 3.2, there exists a MLP $h$ such that for any test $T(\mathcal{E})$ , we have + +$$ +T (\mathcal {E}) = h (M _ {\theta_ {s} ^ {*}, \theta_ {g} ^ {*}} (G)). 
+$$ + +# B.4. COGG Invariances + +Definition B.5 (Component-wise order-preserving groupoid for graphs (COGG)). Denote $\mathbb{X}$ the space of node attributes with $d\geq 1$ dimensions, and $\mathbb{G}(\mathbb{X})$ the space of attributed graphs with attribute space $\mathbb{X}$ and $m\geq 2$ entities. A graph transformation $g:\mathbb{G}(\mathbb{X}_1)\to \mathbb{G}(\mathbb{X}_2)$ of two attribute spaces $\mathbb{X}_1$ and $\mathbb{X}_2$ is said to be a groupoid action of the component-wise order-preserving groupoid for graphs if it can be decomposed into a permutation of node identities $g_{\mathrm{node}}:V\to V$ and a transformation of node attributes $g_{\mathrm{attribute}}:\mathbb{X}_1\to \mathbb{X}_2$ satisfying the following. Given $G_{1} = (V,E_{1},X_{1})\in \mathbb{G}(\mathbb{X}_{1})$ and $G_{2} = (V,E_{2},X_{2})\in \mathbb{G}(\mathbb{X}_{2})$ with $g(G_{1}) = G_{2}$ , + +- $\forall u, v \in V, (u, v) \in E_1 \iff (g_{\mathrm{node}}(u), g_{\mathrm{node}}(v)) \in E_2$ . +- $g_{\text{attribute}}$ is a COGF (Definition B.2) except for any $i \in [d]$ , the $i$ -th component $g_{\text{attribute},i}$ may map the $i$ -th dimension of $\mathbb{X}_1$ to a different dimension of $\mathbb{X}_2$ , while maintaining a one-to-one correspondence between all dimensions of $\mathbb{X}_1$ and $\mathbb{X}_2$ . + +# B.5. STAGE as a COGG Invariant Representation + +Theorem 3.4. STAGE is invariant to COGGs (Definition B.5). + +Proof. Given a graph $G = (V, E, \mathbf{X})$ , a STAGE model $M$ applies two instances of equivariant GNNs, an intra-edge GNN and an inter-edge one, to process the input graph. Denote the intra-edge GNN $M_1$ and the inter-edge GNN $M_2$ . The intra-edge GNN $M_1$ is applied onto $\mathbb{S}_E := \{\{G(\mathbf{S}^{uv}) \mid (u, v) \in E\}\}$ , the set of STAGE-edge-graphs, to produce edge-level embeddings: + +$$ +\boldsymbol {r} ^ {u v} = M _ {1} (\boldsymbol {G} (\boldsymbol {S} ^ {u v})), \forall (u, v) \in E +$$ + +and the inter-edge GNN $M_2$ takes the edge-level embeddings as the edge attributes onto the original graph, i.e., making a $G' = (V, E, \{r_{(u,v) \in E}^{uv}\})$ to produce a final graph representation: + +$$ +M (G) = M _ {2} \left(G ^ {\prime}\right) = M _ {2} \left(\left(V, E, \left\{\left\{r ^ {u v} \right\} \right\} _ {(u, v) \in E}\right)\right) +$$ + +Now, consider a train graph $G_{\mathrm{tr}} = (V_{\mathrm{tr}}, E_{\mathrm{tr}}, \mathbf{X}_{\mathrm{tr}})$ with $\mathcal{E}_{\mathrm{tr}}$ and a test graph $G_{\mathrm{te}} = (V_{\mathrm{te}}, E_{\mathrm{te}}, \mathbf{X}_{\mathrm{te}})$ such that there exists a groupoid action $g$ in the COGG (Definition B.5) satisfying $g(G_{\mathrm{tr}}) = G_{\mathrm{te}}$ . As per Definition B.5, $g$ is composed of a node identity permutation $g_{\mathrm{node}}$ and a attribute transformation $g_{\mathrm{attribute}}$ . + +We first note that the multiset $\{\{r^{uv}\}\}_{(u,v)\in E}$ is invariant to node identity permutation $g_{\mathrm{node}}$ because a multiset is invariant to the permutation of its elements. Since the inter-edge GNN $M_2$ is an equivariant GNN, we have that + +$$ +M \left(g _ {\text {n o d e}} \left(G _ {\text {t r}}\right)\right) = M \left(\left(g _ {\text {n o d e}} \left(V _ {\text {t r}}\right), g _ {\text {n o d e}} \left(E _ {\text {t r}}\right), \left\{\left\{r ^ {u v} \right\} \right\} _ {(u, v) \in g _ {\text {n o d e}} \left(E _ {\text {t r}}\right)}\right)\right) = M \left(G _ {\text {t r}}\right). 
+$$ + +Hence, as long as we can show that the graph representation given by $M$ is also invariant under $g_{\mathrm{attribute}}$ , then we together we can show that $M$ is invariant to our groupoid action $g$ , and that $M(G_{\mathrm{tr}}) = M(G_{\mathrm{te}})$ . + +To proceed, we first note that the groupoid action $g_{\mathrm{attribute}}$ , when applied to an attributed graph $G$ , can be expressed as $g_{\mathrm{attribute}}(G) = (V, E, g_{\mathrm{attribute}}(\mathbf{X}))$ , because the attribute transformation only acts on the node attributes but leaves the graph structure unchanged. Hence, when applying the inner-edge GNN $M_1$ to the multiset of STAGE-edge-graphs of a transformed input graph $g_{\mathrm{attribute}}(G)$ , we write $M_1(g_{\mathrm{attribute}}(\mathbf{G}(\mathbf{S}^{uv})))$ , for all $(u, v) \in E$ . + +Now, all we need to show is that the intra-edge GNN $M_{1}$ produces a multiset of STAGE-edge-graph representations that is invariant under the attribute transformation $g_{\mathrm{attribute}}$ , i.e., $\{\{M_1(\pmb {G}(\pmb {S}^{uv}))\} \}_{(u,v)\in E} = \{\{M_1(g_{\mathrm{attribute}}(\pmb {G}(\pmb {S}^{uv})))\} \}_{(u,v)\in E}$ . Since $g_{\mathrm{attribute}}$ is COGF (Definition B.2) except it may map different training attribute dimensions of $\mathbf{X}_{\mathrm{tr}}$ to different attribute dimensions of $\mathbf{X}_{\mathrm{te}}$ , we can therefore further decompose it into two different components: $h$ and $f$ with $g = h\circ f$ , where $h$ is a mapping that permutes attribute dimensions, and $f$ is a COGF. + +In Theorem 3.3, we have shown that a most expressive GNN applied to a STAGE-edge-graph $G(S^{uv})$ equipped with attribute ids (which are the nodes ids in the STAGE-edge-graph because nodes correspond to attribute dimensions) produces maximal invariant representation under COGF. Hence, this implies that the intra-edge GNN $M_{1}$ , when applied to each STAGE-edge-graph, without unique node ids, will produce an invariant representation to the COGF $f$ . Namely, for all $(u,v)\in E$ , + +$$ +M _ {1} \left(f \left(\boldsymbol {G} \left(\boldsymbol {S} ^ {u v}\right)\right)\right) = M _ {1} \left(\boldsymbol {G} \left(f \left(\boldsymbol {S} ^ {u v}\right)\right)\right) = M _ {1} \left(\boldsymbol {G} \left(\boldsymbol {S} ^ {u v}\right)\right). +$$ + +Note that $f(\mathbf{G}(\mathbf{S}^{uv})) = \mathbf{G}(f(\mathbf{S}^{uv}))$ because $f$ acts on the node and edge attributes in $\mathbf{G}(\mathbf{S}^{uv})$ (which are derived from the attribute values), but preserve the graph structure. + +On the other hand, once the node ids in STAGE-edge-graph $G(S^{uv})$ is dropped, because $M_{1}$ is an equivariant GNN, we also have that the $M_{1}$ 's output representations are invariant to permutations of the attribute dimensions, which corresponds to the permutations of node ids in the STAGE-edge-graph. Namely, for all $(u,v)\in E$ , + +$$ +M _ {1} (h (\boldsymbol {G} (\boldsymbol {S} ^ {u v}))) = M _ {1} (\boldsymbol {G} (\boldsymbol {S} ^ {u v})). +$$ + +Hence, together we have that for any $(u,v)\in E$ + +$$ +M _ {1} \left(g _ {\text {a t t r i b u t e}} \left(\boldsymbol {G} \left(\boldsymbol {S} ^ {u v}\right)\right)\right) = M _ {1} \left(h \circ f \left(\boldsymbol {G} \left(\boldsymbol {S} ^ {u v}\right)\right)\right) = M _ {1} \left(h \left(\boldsymbol {G} \left(\boldsymbol {S} ^ {u v}\right)\right)\right) = M _ {1} \left(\boldsymbol {G} \left(\boldsymbol {S} ^ {u v}\right)\right), +$$ + +Thus completing the proof. + +# C. 
Here we describe how we construct the E-Commerce Category Dataset, the H&M Dataset, and the Social Network Datasets (Friendster and Pokec).

# C.1. E-Commerce Category Dataset

To test the model's generalization to new input attribute spaces, we consider a dataset of E-Commerce users and products (Kechinov, 2020). There are 29,228,809 different product categories, such as smartphones, shoes, and computers. We select a subset of the most popular product categories and form an input graph from the products under each category and their respective connected users. At test time, we hold out an entirely different graph containing unseen products, from new unseen categories and associated users, and test the zero-shot (i.e., frozen model) performance on the test data. In this dataset, we focus on the single task of predicting links between users and products, with links indicating a user purchasing/viewing/carting/uncarting a product.

However, all categories originally share the same attributes. To ensure that the graph domains we build have different attribute types, we use GPT-4 to retrieve information specific to each category. Specifically, the information retrieval process involves prompting GPT-4 with the following content:

```txt
"According to the following information regarding an E-Commerce purchase, give information about the product in the following asked format."
"First, the product is purchased at time: " + row["event_time"] + "."
"Second, the category of the product is " + row["category_code"] + "."
"Third, the brand of the product is " + row["brand"] + "."
"Last, the price of the product is " + str(row["price"]) + "."
"Please provide information about the product in the following json format."
"\{jsonprototype\}"
```

The JSON prototype differs across categories and contains attributes that are specific to the category being prompted. That is, the JSON prototype for smartphones contains, for instance, attributes like display type, which is not an attribute for shoes, which instead contain attributes such as ankle height. In the following, we report the JSON prototypes for the categories.

# bed

```txt
{ "type": , "bed_frame_included": , "footboard_included": , "box_spring_required": ,
```

# desktop

```txt
"ram_gb": ,
"storage_type_hdd_size_gb": ,
"storage_type_ssd_size_gb": ,
"storage_type_hybrid_size_gb": ,
"graphics_card": ,
"power_supply_watts": ,
"cooling_system":
```

# refrigerators

```txt
"energyrating": , "defrost_type": , "has_water_dispenser": , "is_energy_efficient": ,
```

# smartphone

```txt
"display_size": , "display_resolution": , "processor_type": ,
"ram": , "storage_options": , "rear_camera_primary_resolution": , "front_camera_resolution": , "operating_system": , "has_nfc":
}
```

After extracting a different number of attributes for each category, we also append the original two shared attributes of all products (price, brand), which are considered to have a different distribution across categories, yielding the following dataset statistics. Since the customer nodes lack attributes, we build edges between attributed nodes of the same type (e.g., products) based on common connections, forming STAGE-edge-graphs for these new edges. These edges are provided to all baselines.

Table 3: Statistics of E-Commerce Categories
| Category | Number of Nodes | Number of Edges | Average Degree | Num attributes |
| --- | --- | --- | --- | --- |
| bed | 4044 | 25788 | 6.38 | 13 |
| desktop | 3011 | 37450 | 12.44 | 12 |
| refrigerators | 2985 | 33520 | 11.23 | 13 |
| smartphone | 3391 | 31970 | 9.43 | 14 |
| shoes | 4032 | 54890 | 13.62 | 16 |
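A minimal Python sketch of the attribute-extraction loop described in Appendix C.1, assuming a generic chat-completion callable `llm` and a per-category `json_prototype` string (both stand-ins for whichever client and prototypes are actually used):

```python
import json

def build_prompt(row, json_prototype):
    # Assemble the per-row prompt from Appendix C.1.
    return (
        "According to the following information regarding an E-Commerce purchase, "
        "give information about the product in the following asked format. "
        f"First, the product is purchased at time: {row['event_time']}. "
        f"Second, the category of the product is {row['category_code']}. "
        f"Third, the brand of the product is {row['brand']}. "
        f"Last, the price of the product is {row['price']}. "
        "Please provide information about the product in the following json format. "
        f"{json_prototype}"
    )

def extract_attributes(rows, json_prototype, llm):
    # `llm` is a placeholder for a chat-completion client that returns a JSON string.
    return [json.loads(llm(build_prompt(row, json_prototype))) for row in rows]
```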
+ +# C.2. H&M Dataset + +H&M has 106K products, sharing the same 25 attributes, and 1.37M customers, sharing the same 7 attributes. We sampled the interaction between the most popular 830 products and 830 customers based on their node degrees. We discarded 14 product attributes since 12 of them are repetitive (e.g. perceived Colour_value_id is just a one-to-one mapping of perceived Colour_value_name), 1 of them is the detail_desc an English sentence that connects the other attributes, and 1 + +Table 4: Comparison of statistics between the Pokec and Friendster social network datasets after filtering and sampling nodes. + +
| Statistics | Pokec | Friendster |
| --- | --- | --- |
| Number of nodes | 283 | 1392 |
| Number of edges | 2084 | 3322 |
| Number of node attributes | 4 | 5 |
| Attributes | public, completion percentage, region, age | age, interest, occupation, music, tv |
| Average degree | 7.36 | 2.39 |
| Minimum degree | 1 | 1 |
| Maximum degree | 29 | 12 |
| Lowest degrees | [1, 2, 3, 4, 5] | [1, 2, 3, 4, 5] |
| # Nodes with lowest degrees | [35, 31, 25, 21, 14] | [516, 404, 213, 113, 62] |
| Label 0 Ratio | 50.88% | 46.84% |
| Label 1 Ratio | 49.12% | 53.16% |
+ +of them is the article_id serving as the identifier of each product. We also discarded 4 user attributes: customer_id as the identifier, FN and Active due to too many missing values (65% and 66% respectively), and postal_code that is overdispersed. + +After picking the largest connected component of the graph formed by the 830 products and 830 users, we construct this dataset to have 77080 edges, 1580 nodes with an average degree 48.78, and 11 attributes for each product node and 3 attributes for each user node. The product attributes are: product_type_name, product_group_name, graphical_appearance_name, colour_group_name, perceived Colour_value_name, perceived Colour/master_name, department_name, index_name, index_group_name, section_name, garment_group_name. The user attributes are: club_member_status, fashion_news_frequency, age. + +# C.3. Social Network Datasets (Friendster and Pokec) + +The original Pokec social network dataset contains 1632803 nodes and 30622564 edges and each node has 58 attributes. However, 54 of them are difficult to encode either because they are random texts input by the user or because there is no straightforward way to turn the attributes into totally ordered ones. We first filtered out the nodes that contain invalid attributes and then sample the most popular 150 female and male nodes each before picking the largest connected components of the graph formed by the popular nodes. + +The original Friendster social network dataset contains 43880 nodes and 145407 edges and each node has 644 attributes. However, most of the attributes are binary, which is inefficient for STAGE to encode (i.e. will need $644^{*}2$ nodes in each STAGE-edge-graphs). We find out the attributes are in the format of a meta attribute (e.g. occupation) followed by a more detailed attribute (e.g. writer). Therefore, we turned the binary attributes that share the same meta attribute into a multicategorical attribute. We then filtered out the nodes that have only one active binary attribute under each meta attribute (otherwise the multi-category does not make sense) and pick the largest connected components of the graph formed by the these nodes. + +In the end, the statistics of Pokec and Friendster datasets are available in Table 4. + +# D. Experiment Details + +For Figure 3, Table 1, and Figure 4 We use the default NBFNet-PyG configuration for the inductive WN18RR dataset (Zhu et al., 2021c), except for a few specific parameters. The input dimension for the node attribute is set to 256, and the model includes six hidden layers with dimensions [256, 256, 256, 256, 256, 256], making a total of seven layers. For STAGE, we use 1 layer of GINEConv (Hu et al., 2020) for the GNN on STAGE-edge-graph, which produces an edge representation of dimension 256. We also append an extra p_value to each edge in the STAGE-edge-graph for expressivity. All model are trained with a batch size of 32 over 30 epochs. + +For Figure 3, Figure 4, and the E-Commerce columns of Table 1 we average over seeds 0, 1, 2. For the H&M columns of Table 1, we average over seeds 1024, 1025, 1026. + +For Table 2, we average over seeds 32, 33, and 34 using the following configuration. The input attribute dimension is set to 64, with 128 as the dimension of hidden channels. The model uses 2 layers of GINEConv (Hu et al., 2020). The learning + +Table 5: Zero-shot test Mean-Square Loss (lower is better) of STAGE and baselines on the Pokec dataset with regression tasks on predicting the user's age. 
Models were trained on the same sample of the Friendster dataset as in Section 4. All models perform equally poorly on this very challenging task; for reference, constantly predicting the mean of all age values already yields a root mean squared error (RMSE) of 10.7. We use the same configurations as in Table 2. N/A means the model does not support node regression tasks.
| Model | RMSE (↓) |
| --- | --- |
| GINE-structural | 10.99 ± 0.000 |
| GINE-gaussian | 10.99 ± 0.000 |
| GINE-normalized | 10.99 ± 0.000 |
| GINE-llm | 10.99 ± 0.000 |
| GINE-age | 10.99 ± 0.000 |
| GraphAny | N/A |
| GCOPE | N/A |
| GINE-STAGE (Ours) | 10.99 ± 0.000 |
+ +rate for the optimizer was set to 0.0001, with a dropout rate of 0.5 to mitigate overfitting. Training was carried out for 400 epochs. Additionally, STAGE is deployed with 2 layers of GNN on STAGE-edge-graph with GINEConv and an edge representation of dimension 32. For GraphAny, we adopt the default configuration as preliminary experiments indicated that modifying hyperparameters yielded no significant performance improvements. + +# E. Age Regression Experiment Results + +Table 5 shows that the zero-shot regression on age across different social networks is a challenging task, particularly when the age distributions of the datasets are drastically different. GraphAny and GCOPE are not included because they are designed for and only supports node classification tasks. Figure 5 shows that the age distribution in the Pokec dataset is skewed towards younger users, with notable frequencies for ages such as 0 (invalid data), 18, and 20, while ages above 42 are scarcely represented. In contrast, the Friendster dataset contains a much broader range of ages, including significant numbers of users aged in their mid-twenties, such as 25, with smaller frequencies for users up to age 91. This disparity in distribution—where Pokec's frequencies are centered around younger users and Friendster's are more spread across the adult age spectrum—poses a substantial difficulty for models attempting to generalize across the two networks. + +# F. Ablation Study + +In this section, we provide ablation studies to further investigate the effectiveness and versatility of STAGE. Experiments in Appendix F.1 complement the main results in the paper by exploring whether STAGE is effective on alternative GNN backbones and configurations. Experiments in Appendix F.2 then study if STAGE can outperform a model trained on the common attributes shared between train and test domain, validating whether STAGE truly leverages dependencies among unseen attributes at test time to make predictions. + +# F.1. Evaluating STAGE with GCN as the backbone GNN + +In the main experiments, we employed GINE + NBFNet for link prediction and GINE + GINE for node classification as the backbone GNN configurations. A natural question arises: Can STAGE be effective when using other backbone GNN architectures? To address this, we propose GCN-STAGE (GINE + GCN (Kipf & Welling, 2016)), where we replace the second GINE with a modified GCN to perform message passing on the original graph. We choose GCN as it is a well-known baseline for node classification tasks. We modified GCN to process edge attributes by applying an MLP layer to edge attributes before incorporating them into the edge messages. The first GINE model operating on STAGE-edge-graphs remained unchanged. + +Table 6 presents the results, which demonstrate that GCN-STAGE outperforms all baseline methods in terms of average accuracy. Comparing to the other GCN-backbone models, GCN-STAGE outperforms with a $7.33\%$ relative improvement, and achieves an order-of-magnitude smaller standard deviation, showcasing the stability and consistency of predictions across random seeds. Furthermore, same as GINE-STAGE, GCN-STAGE also outperforms GraphAny (Zhao et al., 2024b), demonstrating that STAGE is effective on both GCN and GINE. 
We note that, however, the gain observed with GCN-STAGE + +![](images/272ac7c9991a9b3c609cbea30929cdf5d91231ef75b57dc4ad3478c213175086.jpg) + +![](images/f7a2d4e99f0246571b710b4cac22e7d2e1756744e2d02457a2f5473fd0b54d50.jpg) +Figure 5: Comparison of Age Distributions in Pokec and Friendster Datasets. The top histogram shows the age distribution for the Pokec dataset, where a significant number of users have an age of 0, followed by a noticeable peak around the age of 20. The bottom histogram illustrates the age distribution for the Friendster dataset, with a strong concentration of users around the age of 25, and a smaller presence of older individuals. + +is slightly lower than that of GINE-STAGE as shown in Table 2. This is not surprising, as GCN has been shown to have lesser expressivity than GINE (Xu et al., 2018). + +These results demonstrate the effectiveness of STAGE regardless of the backbone GNN architecture (GINE or GCN), reinforcing the versatility and general applicability of STAGE across tasks and architectures, further solidifying its strength as a robust framework. + +# F.2. Comparison with models trained on common attributes + +In the second ablation study, we aim to investigate whether STAGE is truly leveraging dependencies among multiple unseen node attributes to make zero-shot predictions on the test domain, rather than simply relying on the common attributes shared between train and test. In particular, the attribute "price" and "brand" are shared between the E-commerce datasets (Appendix C.1), and the attribute "age" is shared between Friendster and Pokec (Appendix C.3). Hence, we compare STAGE to a model with the same backbone GNN trained to utilize the shared attribute to make predictions. We name these models NBFNet-price on E-commerce datasets for link prediction, and GINE-age on Friendster and Pokec for node classification. We do not experiment with training on the "brand" attribute because its values are distinct (or the distribution have different supports) in different product categories. + +Table 6: Zero-shot test accuracy (higher is better) of STAGE and baselines on the Pokec dataset. Models were trained on a sample of the Friendster dataset. GCN-STAGE demonstrates the best zero-shot test accuracy, surpassing all other methods in both average accuracy and stability. + +
| Model | Accuracy (↑) |
| --- | --- |
| GCN-structural | 0.547 ± 0.0658 |
| GCN-gaussian | 0.567 ± 0.0382 |
| GCN-normalized | 0.570 ± 0.0315 |
| GCN-llm | 0.526 ± 0.0300 |
| GraphAny | 0.591 ± 0.0083 |
| GCOPE | 0.535 ± 0.0153 |
| GCN-STAGE (Ours) | 0.593 ± 0.0046 |
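The GCN modification used for GCN-STAGE (Appendix F.1) amounts to passing edge attributes through an MLP before adding them to the neighbor messages. A minimal PyTorch Geometric sketch, with mean aggregation standing in for GCN's degree normalization and all layer sizes left as assumptions:

```python
import torch.nn as nn
from torch_geometric.nn import MessagePassing

class EdgeAttrGCNConv(MessagePassing):
    """Sketch of a GCN-style layer whose messages also carry MLP-transformed edge attributes."""

    def __init__(self, in_dim, out_dim, edge_dim):
        super().__init__(aggr="mean")  # simplification of GCN's symmetric normalization
        self.lin = nn.Linear(in_dim, out_dim)
        self.edge_mlp = nn.Sequential(
            nn.Linear(edge_dim, out_dim), nn.ReLU(), nn.Linear(out_dim, out_dim)
        )

    def forward(self, x, edge_index, edge_attr):
        return self.propagate(edge_index, x=self.lin(x), edge_attr=edge_attr)

    def message(self, x_j, edge_attr):
        # Inject the (STAGE) edge representation into each neighbor message.
        return x_j + self.edge_mlp(edge_attr)
```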
+

Table 7: Zero-shot Hits@1 and MRR of NBFNet-STAGE and NBFNet-price on the E-Commerce dataset. Models are trained on all combinations of four graph domains and tested on the remaining domain. NBFNet-STAGE significantly outperforms NBFNet-price, demonstrating that STAGE effectively utilizes more information than the common attribute (price) shared between attribute domains.
| Model | Hits@1 (↑) | MRR (↑) |
| --- | --- | --- |
| NBFNet-price | 0.2713 ± 0.0280 | 0.3263 ± 0.0301 |
| NBFNet-STAGE (Ours) | 0.4606 ± 0.0123 | 0.4971 ± 0.0073 |
+

Tables 7 and 8 show the results of this ablation study. NBFNet-STAGE outperforms NBFNet-price with a relative improvement of $69.8\%$ and GINE-STAGE outperforms GINE-age with a relative improvement of $12.0\%$. These results corroborate our statement that STAGE is capable of leveraging complex dependencies among multiple attributes to make predictions, even when said attributes are unseen during training, as STAGE significantly outperforms the models relying only on shared attributes.

# G. Interpreting STAGE

In Section 4, we demonstrated that STAGE performs strongly when applied zero-shot to unknown attribute domains. A natural question arises: how does STAGE recognize unseen attributes during zero-shot testing, and which attributes are most relevant for making predictions? To address this, we conduct a qualitative analysis of STAGE's behavior using saliency maps (Erhan et al., 2009; Simonyan et al., 2013). This method computes the gradients of the model's outputs with respect to the input data, quantifying how much each input influences the model's prediction.

We train NBFNet-STAGE on all 5 categories of the E-Commerce dataset. We then perform zero-shot inference on the H&M dataset, obtaining triplet scores for missing edges. Let $G = (V,E,X)$ be the input graph of the H&M dataset with $d$ attributes, $\mathcal{T}$ be the set of ground-truth triplets in H&M, and $M(t), t \in \mathcal{T}$ be the model's output triplet score. Recall that STAGE transforms raw attribute values into edge attributes $S_{f_1f_2}^{uv}, S_{(f_1 + d)f_2}^{uv}, S_{f_1(f_2 + d)}^{uv}, S_{(f_2 + d)(f_1 + d)}^{uv}$ for each pair of attributes $f_1, f_2$ and every edge $(u,v) \in E$. We compute the saliency of an attribute pair, $E_{G,\mathcal{T}}(f_1, f_2)$, as follows:

$$
E_{G, \mathcal{T}}(f_1, f_2) := \sum_{t \in \mathcal{T}} \sum_{(u, v) \in E} \left| \frac{\partial M(t)}{\partial \boldsymbol{S}_{f_1 f_2}^{uv}} + \frac{\partial M(t)}{\partial \boldsymbol{S}_{(f_1 + d) f_2}^{uv}} + \frac{\partial M(t)}{\partial \boldsymbol{S}_{f_1 (f_2 + d)}^{uv}} + \frac{\partial M(t)}{\partial \boldsymbol{S}_{(f_1 + d)(f_2 + d)}^{uv}} + \frac{\partial M(t)}{\partial \boldsymbol{S}_{f_2 f_1}^{uv}} + \frac{\partial M(t)}{\partial \boldsymbol{S}_{(f_2 + d) f_1}^{uv}} + \frac{\partial M(t)}{\partial \boldsymbol{S}_{f_2 (f_1 + d)}^{uv}} + \frac{\partial M(t)}{\partial \boldsymbol{S}_{(f_2 + d)(f_1 + d)}^{uv}} \right|.
$$

Thus, $E_{G,\mathcal{T}}(f_1,f_2)$ indicates how strongly the attributes $f_{1}$ and $f_{2}$ jointly influence the model's output predictions.

One of the most striking aspects of STAGE is its ability to recognize relevant attribute dependencies zero-shot at inference time (i.e., without requiring fine-tuning). To investigate this phenomenon, we examine the saliency values of every pair of product attributes $f_{1}, f_{2}$ in the H&M dataset during zero-shot inference.

Figure 6 presents a heatmap of these normalized saliency values $E_{G,\mathcal{T}}(f_1,f_2)$. The color bar indicates that lighter shades

Table 8: Zero-shot test accuracy of GINE-STAGE and GINE-age on the social network datasets. Models are trained on Friendster and zero-shot tested on Pokec.
GINE-STAGE outperforms GINE-age, demonstrating that STAGE effectively utilizes more information than the common attribute (age) shared between attribute domains.
| Model | Accuracy (↑) |
| --- | --- |
| GINE-age | 0.582 ± 0.0657 |
| GINE-STAGE (Ours) | 0.652 ± 0.0042 |
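For concreteness, the gradient-based saliency $E_{G,\mathcal{T}}(f_1, f_2)$ defined in Appendix G can be computed as in the following minimal PyTorch sketch. It assumes the STAGE edge attributes of the input graph are stored as a tensor `S` of shape `[num_edges, 2d, 2d]` and that `model(S, t)` returns the score of triplet `t`; the actual implementation and tensor layout may differ.

```python
import torch

def attribute_pair_saliency(model, S, d, triplets):
    """Saliency E_{G,T}(f1, f2): over all triplets and edges, accumulate the absolute
    value of the summed partial derivatives w.r.t. every copy of the (f1, f2) pair."""
    saliency = torch.zeros(d, d)
    for t in triplets:
        S_req = S.clone().requires_grad_(True)
        model(S_req, t).backward()          # gradient of this triplet's score w.r.t. S
        g = S_req.grad                      # shape [num_edges, 2d, 2d]
        for f1 in range(d):
            for f2 in range(d):
                # sum the eight partials for the (f1, f2) pair on every edge,
                # then accumulate their absolute value over edges
                pair = sum(g[:, i, j] + g[:, j, i]
                           for i in (f1, f1 + d) for j in (f2, f2 + d))
                saliency[f1, f2] += pair.abs().sum()
    return saliency
```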
+ +![](images/8f21745fbc7b486dc22230d31675b6af540d387ac65cea5708b5a550b9f59fee.jpg) +Figure 6: Saliency heatmap of the product attribute pairs in the H&M dataset. A lighter color indicates a larger gradient of the model's outputs w.r.t. the pair of attributes, hence the attribute pair is more relevant to the model's predictions. attributes that describes the product's type and categories – “product type”, “department”, “section”, “garment group” – generally are more relevant, whereas product indexes – “product index”, “product index group” – are least relevant to the model's predictions. + +correspond to larger saliency values, signifying a greater impact on the model's output. Surprisingly, we find that certain attribute pairs exhibit high saliency values, such as "product type", "department", "section", and "garment group" because these attributes form a natural hierarchical taxonomy that effectively narrows down and defines specific products. A product's identity is progressively constrained through this hierarchy, from department (e.g., Ladies), to section (e.g., Clothes), to garment group (e.g., Dresses), to specific product type (e.g., Cocktail Dress). Each level in this taxonomy provides increasingly specific product categorization, making these attribute pairs particularly informative for product identification and classification. This is notable because these attributes are not explicitly labeled or weighted in our dataset; instead, STAGE has learned to recognize their importance while pretrained on the different attribute space of E-commerce. + +In contrast, attribute pairs like "product index" and "product index group" demonstrate low saliency values. This makes sense, as these attributes are arbitrary numerical identifiers that carry no semantic meaning about the product's characteristics, intended use, or target demographic. Unlike meaningful attributes that describe product properties, these index values are simply database artifacts used for internal record-keeping. However, the fact that STAGE can distinguish between relevant and irrelevant attributes without explicit guidance is a testament to its ability to capture subtle patterns in the data. + +These results are particularly remarkable because they emerge from zero-shot inference, where all attributes are unseen and no fine-tuning has been performed. This suggests that STAGE is capable of generalizing to new domains and tasks, even when faced with unfamiliar attribute sets. Our findings reinforce our main experimental results in Section 4, providing further evidence of STAGE's ability to capture pertinent attribute relationships for effective task performance zero-shot. + +Table 9: Average per-epoch training and inference time on E-Commerce dataset and zero-shot Hits@1 performance of NBFNet-STAGE and baselines on H&M dataset. Time is measured on an 80GB A100 GPU and averaged across 3 training epochs. NBFNet-STAGE is $7.83\%$ slower in training than the fastest baseline NBFNet-llm and $17.02\%$ slower in inference than the fastest baseline NBFNet-raw, while being respectively $103\%$ and $933\%$ better. + +
| Model | Wall Time per Training Epoch (s) | Inference Time on Test (s) | Zero-shot Hits@1 on H&M |
| --- | --- | --- | --- |
| NBFNet-raw | 318.65 | 66.16 | 0.0005 ± 0.0004 |
| NBFNet-gaussian | 322.13 | 69.04 | 0.0925 ± 0.0708 |
| NBFNet-structural | 322.31 | 69.78 | 0.2231 ± 0.0060 |
| NBFNet-llm | 316.55 | 67.57 | 0.2302 ± 0.0015 |
| NBFNet-normalized | 316.87 | 68.36 | 0.2286 ± 0.0010 |
| NBFNet-STAGE (Ours) | 341.36 | 77.42 | 0.4666 ± 0.0020 |
+ +# H. Complexity Analysis and Runtime Comparison + +# H.1. STAGE Time Complexity + +Here we analyze the time complexity of STAGE. In particular, we analyze NBFNet-STAGE (used for link prediction) and GINE-STAGE (used for node classification). + +Let $d$ be the number of attributes, $h$ the dimension of internal node and edge embeddings, $|E|$ the number of edges, and $|V|$ the number of nodes in the input graph. For all tasks, STAGE consists of three steps: + +1. Fully Connected STAGE-edge-graph Construction: This step requires $O(|E|d^2)$ operations because each fully connected STAGE-edge-graph has $2d$ nodes, and each edge in the original graph induces a fully connected STAGE-edge-graph. +2. Inference on STAGE-edge-graphs: We use 2 shared layers of GINE for all STAGE-edge-graphs. A single layer on one fully connected STAGE-edge-graph has complexity $O(dh + d^2 h) = O(d^2 h)$ , since we have $2d$ $h$ -dimensional nodes and $(2d)^2$ $h$ -dimensional edges in each STAGE-edge-graph. Obtaining edge embeddings across all STAGE-edge-graphs takes $O(|E|d^2 h)$ . +3. Inference on the original graph: For link prediction tasks, we use NBFNet to perform message passing on the original graph, which requires $O(|E|h + |V|h^2)$ for one forward pass (where one forward pass gives representations conditioned on a single source node and relation, therefore, predicting links (s, q, ?) for a given source node s and relation q) (Zhu et al., 2021c). For node classification tasks, we use GINE again, which requires $O(|E|h)$ time. + +Hence, in total, running one forward pass has a complexity of $O(|E|d^2h + |E|h + |V|h^2)$ for NBFNet-STAGE, and $O(|E|d^2h + |E|h)$ for GINE-STAGE. + +# H.2. Training and inference time comparison + +The analysis above shows the theoretical complexity of STAGE. Now we study the computational overhead when deployed in practice. To this end, we measured the average wall time per training epoch of NBFNet-STAGE and GINE-STAGE on respectively the E-Commerce Stores dataset and the Friendster dataset (see Appendix C) as well as their average inference time on the H&M dataset and the Pokec dataset using an 80GB A100 GPU. + +Tables 9 and 10 displays the runtime comparison results. We observe that in train NBFNet-STAGE is $7.83\%$ slower than the fastest baseline (NBFNet-llm) and GINE-STAGE is $49.7\%$ slower than the fastest baseline (GINE-gaussian), while being respectively $103\%$ and $933\%$ better in zero-shot Hit@1 on H&M. During inference, NBFNet-STAGE is $17.02\%$ slower than the fastest baseline (NBFNet-raw) and GINE-STAGE is $61.8\%$ slower than the fastest baseline (GINE-structural), while being respectively $11\%$ and $16\%$ better in zero-shot accuracy on Pokec. The additional time is due to computing STAGE-edge-graph embeddings during each forward pass, while building the STAGE-edge-graphs is a one-time pre-processing step. Moreover, Table 10 shows that GINE-STAGE achieves $1.74\times$ speedup in training and $4.01\times$ speedup in inference than the best baseline GraphAny, which is specifically designed for the same tasks. + +Table 10: Average per-epoch training and inference time on Friendster and zero-shot accuracy of GINE-STAGE and baselines on Pokec. Time is measured on an 80GB A100 GPU and averaged across 20 training epochs. GINE-STAGE is $49.7\%$ slower in training than the fastest baseline GINE-gaussian and $61.8\%$ slower in inference than the fastest baseline GINE-structural, while being respectively $11\%$ and $16\%$ better. 
Moreover, GINE-STAGE achieves a $1.74 \times$ speedup in training and a $4.01 \times$ speedup in inference over the best baseline GraphAny.
| Model | Wall Time per Training Epoch (s) | Inference Time on Test (s) | Zero-shot Accuracy on Pokec |
| --- | --- | --- | --- |
| GINE-raw | 0.0313 | 0.0067 | 0.558 ± 0.0829 |
| GINE-gaussian | 0.0296 | 0.0061 | 0.588 ± 0.0250 |
| GINE-structural | 0.0292 | 0.0055 | 0.564 ± 0.0466 |
| GINE-llm | 0.0322 | 0.0064 | 0.550 ± 0.0368 |
| GINE-normalized | 0.0316 | 0.0072 | 0.541 ± 0.0148 |
| GraphAny | 0.0762 | 0.0357 | 0.591 ± 0.0083 |
| GCOPE | 1.0524 | 0.3619 | 0.535 ± 0.0153 |
| GINE-STAGE (Ours) | 0.0437 | 0.0089 | 0.652 ± 0.0042 |
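To make the asymptotic counts in Appendix H.1 concrete, consider the Friendster sample from Table 4 ($|E| = 3322$, $d = 5$) with the node-classification configuration of Appendix D ($h = 128$ hidden channels); the numbers below are an order-of-magnitude illustration under these assumptions:

$$
|E|\, d^2 h = 3322 \cdot 25 \cdot 128 \approx 1.1 \times 10^{7}, \qquad |E|\, h = 3322 \cdot 128 \approx 4.3 \times 10^{5},
$$

so, while the STAGE-edge-graph term dominates asymptotically, its absolute size for such small $d$ remains modest, in line with the runtime overhead reported in Table 10.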
+ +# I. Additional Discussion of Related Work + +Foundation Models for Graph Data. Foundation models for graph data aim to create versatile graph models capable of generalizing across different graphs and tasks. Despite growing interest, achieving a truly universal graph foundation model remains challenging, especially due to the complexities in designing a suitable graph vocabulary that ensures transferability across datasets and tasks (Mao et al., 2024). Initial efforts in this direction convert attributed graphs into texts and apply an LLM, but this methodology, while promising, risks information loss and may limit transferability (Collins et al., 2024; Gruver et al., 2024; Schwartz et al., 2024). For instance, OFA (Liu et al., 2024) uses frozen LLMs to generate attributes, and then trains a GNN to perform multiple tasks, while Chen et al. (2024b); Fatemi et al. (2024); Perozzi et al. (2024) explores the potential of LLMs as predictors or enhancers of graph-based predictions. Other methods, like LLaGA (Chen et al., 2024a) and GraphGPT (Tang et al., 2024), use instruction tuning to map graph data into the LLM embedding space. Similarly, Graphtext (Zhao et al., 2023) and Unigraph (He & Hooi, 2024) adopt NLP techniques, with Graphtext (Zhao et al., 2023) translating graphs into natural language via a syntax tree encapsulating node attributes and inter-node relationships, and Unigraph (He & Hooi, 2024) learning a unified graph tokenizer in a self-supervised fashion to generalize across different attribute domains. Prodigy (Huang et al., 2023) further encodes textual attributes with an LLM and leverages prompt-based graph representations for task generalization. + +In contrast, recent approaches forgo LLMs entirely. For instance, Xia & Huang (2024) employs projections of the attribute matrix to handle shifts to new datasets. We exclude this method along with two similar recent approaches (Yu et al., 2024; Zhao et al., 2024a) from our baseline comparisons. This is because these methods' end-to-end architectures for cross-domain graph pre-training employ SVD-based dimensionality reduction primarily as an input processing step, similar to our projection method for unified input space. Given our focus on understanding the effectiveness of input space unification techniques, we study this component in isolation (c.f. the baseline raw in Section 4) rather than comparing against their full architectures which include additional mechanisms like domain tokens and coordinators. Lachi et al. (2024) employs a Perceiver-based encoder to compress domain-specific attributes into a shared latent space. However, since their method requires finetuning when adapting to unseen out-of-distribution datasets, it falls outside our focus on zero-shot generalization capabilities without additional training. Zhao et al. (2024b) proposes GraphAny, specifically designed for node classification, which models inference on new graphs as an analytical solution to LinearGNNs, and addresses generalization by learning attention scores to fuse predictions from multiple LinearGNNs. However, STAGE outperforms GraphAny in Table 2. Mao et al. (2023) introduces the concept of attribute proximity as a key factor in determining the likelihood of links forming between nodes. Unfortunately, the definition of proximity still depends on the attribute space, making the method unsuitable in our settings of interest. Frasca et al. 
(2024) proposes "Feature-Structuralization", which converts categorical node features into additional nodes and edges in the graph structure itself. However, adapting this technique would require modification to handle continuous node attributes, making it not directly applicable as a baseline for our work.

Another line of work addresses zero-shot domain transferability on heterogeneous graphs such as knowledge graphs, where both the nodes (entities) and edge types (relations) may be new and unseen on the test-time graph. For instance, ISDEA+ (Gao et al., 2023) proposes a set aggregation layer over the set of edge-type-specific graph representations to ensure equivariance to edge type permutations. Gao et al. (2023) also proposes a theoretical framework named double equivariance that underlies the necessary design principles of models capable of tackling such a task. In contrast, the theoretical framework of our work addresses transferability to unseen attribute domains and proposes a novel connection between statistical tests and the graph regression task. ULTRA (Galkin et al., 2024) and TRIX (Zhang et al., 2024b), on the other hand, build a relation graph that captures the interactions among different edge types, and apply pipelines based on NBFNet (Zhu et al., 2021c) to ensure equivariance to edge type permutations. Similarly, InGram (Lee et al., 2023) also builds a relation graph, but its relation graph differs from ULTRA's in that it computes a set of affinity scores between pairs of relations and uses them as edge weights on the relation graph. In comparison, the STAGE-edge-graphs built by our method capture the statistical dependencies among different attribute dimensions of node attributes in the graph. However, all of these methods rely solely on graph structure and disregard node attributes. In contrast, our work focuses on attributed graphs and is capable of leveraging the important information carried in node attributes.

Finally, Bevilacqua et al. (2025) recently introduced HoloGNN, a framework that learns node representations transferable across diverse graph tasks. However, HoloGNN assumes a fixed attribute domain and does not address the challenge of generalizing to unseen datasets with differing attribute spaces, which is the focus of our setting.

Maximal Invariants and Statistical Testing. Bell (1964) first explored the relationship between invariant and almost-invariant tests in hypothesis testing. Berk & Bickel (1968) and Berk (1970) extended Bell's approach to show that almost-invariant tests are equivalent to invariant ones under certain conditions, which are those met in our work. Later, Berk et al. (1996) explored the interplay between sufficiency and invariance in hypothesis testing by providing counterexamples that demonstrate how these concepts can differ significantly in other scenarios. Recently, Koning & Hemerik (2024) improved the efficiency of hypothesis testing under invariances for large transformation groups such as rotation or sign-flipping without resorting to sampling.
\ No newline at end of file diff --git a/ICML/2025/Zero-Shot Generalization of GNNs over Distinct Attribute Domains/images.zip b/ICML/2025/Zero-Shot Generalization of GNNs over Distinct Attribute Domains/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..1caa92bbe42935318ba8991a0a7197ed66dfc5d3 --- /dev/null +++ b/ICML/2025/Zero-Shot Generalization of GNNs over Distinct Attribute Domains/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3e2df5c55c63ec89c0592fadf293cc4785ceff3858ae627766f22ecaa12429e7 +size 994195 diff --git a/ICML/2025/Zero-Shot Generalization of GNNs over Distinct Attribute Domains/layout.json b/ICML/2025/Zero-Shot Generalization of GNNs over Distinct Attribute Domains/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..3e3f8711b78edc2819cd727f8c1efaadd093b813 --- /dev/null +++ b/ICML/2025/Zero-Shot Generalization of GNNs over Distinct Attribute Domains/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:cd6ca0ecb999b3c4833d0c284643b3e3227bfcf4f4933226c826e7b2faffc0a3 +size 1485762 diff --git a/ICML/2025/Zero-Shot Offline Imitation Learning via Optimal Transport/8aedc65a-0768-45dc-94d5-6be5947ddcd7_content_list.json b/ICML/2025/Zero-Shot Offline Imitation Learning via Optimal Transport/8aedc65a-0768-45dc-94d5-6be5947ddcd7_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..64bb03535f32183b966ea8c0ad06a7339675c00c --- /dev/null +++ b/ICML/2025/Zero-Shot Offline Imitation Learning via Optimal Transport/8aedc65a-0768-45dc-94d5-6be5947ddcd7_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:eb41757dc0a1f90a1e799778846915e35186d76d1f01c00230ba4101a7cbaab6 +size 198853 diff --git a/ICML/2025/Zero-Shot Offline Imitation Learning via Optimal Transport/8aedc65a-0768-45dc-94d5-6be5947ddcd7_model.json b/ICML/2025/Zero-Shot Offline Imitation Learning via Optimal Transport/8aedc65a-0768-45dc-94d5-6be5947ddcd7_model.json new file mode 100644 index 0000000000000000000000000000000000000000..16f4259b393cdcbfd52d79638e96e9fac7176124 --- /dev/null +++ b/ICML/2025/Zero-Shot Offline Imitation Learning via Optimal Transport/8aedc65a-0768-45dc-94d5-6be5947ddcd7_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3bc297bc9cdb4f2376023a958c13506b21aac855db033265366eb8559ae7733e +size 243295 diff --git a/ICML/2025/Zero-Shot Offline Imitation Learning via Optimal Transport/8aedc65a-0768-45dc-94d5-6be5947ddcd7_origin.pdf b/ICML/2025/Zero-Shot Offline Imitation Learning via Optimal Transport/8aedc65a-0768-45dc-94d5-6be5947ddcd7_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..8117d20d5c7776d7a463787e9f0b8cec8c775a80 --- /dev/null +++ b/ICML/2025/Zero-Shot Offline Imitation Learning via Optimal Transport/8aedc65a-0768-45dc-94d5-6be5947ddcd7_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e363cc7ae6b7a9fbe1457d79422fe62874b5f8b2cc4675fc64a89b9d60802115 +size 17572026 diff --git a/ICML/2025/Zero-Shot Offline Imitation Learning via Optimal Transport/full.md b/ICML/2025/Zero-Shot Offline Imitation Learning via Optimal Transport/full.md new file mode 100644 index 0000000000000000000000000000000000000000..e562beedce4a5c216c340dbcdd7fc942147ddac7 --- /dev/null +++ b/ICML/2025/Zero-Shot Offline Imitation Learning via Optimal Transport/full.md @@ -0,0 +1,831 @@ +# Zero-Shot Offline Imitation Learning via Optimal Transport 
+ +Thomas Rupf12 Marco Bagatella23 Nico Gurtler12 Jonas Frey23 Georg Martius12 + +# Abstract + +Zero-shot imitation learning algorithms hold the promise of reproducing unseen behavior from as little as a single demonstration at test time. Existing practical approaches view the expert demonstration as a sequence of goals, enabling imitation with a high-level goal selector, and a low-level goal-conditioned policy. However, this framework can suffer from myopic behavior: the agent's immediate actions towards achieving individual goals may undermine long-term objectives. We introduce a novel method that mitigates this issue by directly optimizing the occupancy matching objective that is intrinsic to imitation learning. We propose to lift a goal-conditioned value function to a distance between occupancies, which are in turn approximated via a learned world model. The resulting method can learn from offline, suboptimal data, and is capable of non-myopic, zero-shot imitation, as we demonstrate in complex, continuous benchmarks. The code is available at https://github.com/martius-lab/zilot. + +# 1. Introduction + +The emergence of zero/few-shot capabilities in language modeling (Brown et al., 2020; Wei et al., 2022; Kojima et al., 2022) has renewed interest in generalist agents across all fields in machine learning. Typically, such agents are pretrained with minimal human supervision. At inference, they are capable of generalization across diverse tasks, without further training, i.e. zero-shot. Such capabilities have also been a long-standing goal in learning-based control (Duan et al., 2017). Promising results have been achieved by leveraging the scaling and generalization properties of supervised learning (Jang et al., 2022; Reed et al., 2022; O'Neill et al., 2023; Ghosh et al., 2024; Kim et al., 2024), + +1Universität Tübingen, Tübingen, Germany 2MPI for Intelligent Systems, Tübingen, Germany 3ETH, Zürich, Switzerland. Correspondence to: Thomas Rupf . + +Proceedings of the $42^{nd}$ International Conference on Machine Learning, Vancouver, Canada. PMLR 267, 2025. Copyright 2025 by the author(s). + +which however rely on large amounts of expert data, usually involving costly human participation, e.g. teleoperation. A potential solution to this issue can be found in reinforcement learning approaches, which enable learning from suboptimal data sources (Sutton & Barto, 2018). Existing methods within this framework ease the burden of learning general policies by limiting the task class to additive rewards (Laskin et al., 2021; Sancaktar et al., 2022; Frans et al., 2024) or single goals (Bagatella & Martius, 2023). + +This work lifts the restrictions of previous approaches, and proposes a method that can reproduce rich behaviors from offline, suboptimal data sources. We allow arbitrary tasks to be specified through a single demonstration devoid of actions at inference time, conforming to a zero-shot Imitation Learning (IL) setting (Pathak et al., 2018; Pirotta et al., 2024). Furthermore, we consider a relaxation of this setting (Pathak et al., 2018), where the expert demonstration may be rough, consisting of an ordered sequence of states without precise time-step information, and partial, meaning each state contains only partial information about the full state. These two relaxations are desirable from a practical standpoint, as they allow a user to avoid specifying information that is either inconsequential to the task or costly to attain (e.g. only through teleoperation). 
For example, when tasking a robot arm with moving an object along a path, it is sufficient to provide the object's position for a few "checkpoints" without specifying the exact arm pose. + +In principle, a specified goal sequence can be decomposed into multiple single-goal tasks that can be accomplished by goal-conditioned policies, as proposed by recent zero-shot IL approaches (Pathak et al., 2018; Hao et al., 2023). However, we show that this decomposition is prone to myopic behavior when expert demonstrations are partial. Continuing the robotic manipulation example from above, let us consider a task described by two sequential goals, each specifying a certain position that the object should reach. In this case an optimal goal-conditioned policy would attempt to reach the first goal as fast as possible, possibly by throwing the object towards it. The agent would then relinquish control of the object, leaving it in a suboptimal—or even unrecoverable—state. In this case, the agent would be unable to move the object towards the second goal. This myopic behavior is a fundamental issue arising from goal abstraction, as we formally argue in Section + +![](images/6213fe7447cd2a51144748bcb85ae3b44eb4e45efcb9c0912426289b9ee490ce.jpg) +Figure 1. Overview of ZILOT. After learning a world model $\hat{P}$ and a goal-conditioned value function $V$ from offline data (left), a zero-order optimizer directly matches the occupancy of rollouts $\hat{\rho}^{\pi}$ from the learned world model to the occupancy of a single expert demonstration $\hat{\rho}^{E}$ (center). This is done by lifting the goal-conditioned value function to a distance between occupancies using Optimal Transport. The resulting policy displays non-myopic behavior (right). + +![](images/9640543f55129ba724d48329dd92ed4e7d56fed1e79778e84b4c8a4d06ba04d0.jpg) + +![](images/00f983199b9f377687ca16155ba8a84a5342d045a46c973d979252183f74ff6d.jpg) + +3, and results in catastrophic failures in hard-to-control environments, as we demonstrate empirically in Section 5. + +In this work we instead provide a holistic solution to zero-shot offline imitation learning by adopting an occupancy matching formulation. We name our method ZILOT (Zero-shot Offline Imitation Learning from Optimal Transport). We utilize Optimal Transport (OT) to lift the state-goal distance inherent to GC-RL to a distance between the expert's and the policy's occupancies, where the latter is approximated by querying a learned world model. Furthermore, we operationalize this distance as an objective in a standard fixed horizon MPC setting. Minimizing this distance leads to non-myopic behavior in zero-shot imitation. We verify our claims empirically by comparing our planner to previous zero-shot IL approaches across multiple robotic simulation environments, down-stream tasks, and offline datasets. + +# 2. Preliminaries + +# 2.1. Imitation Learning + +We model an environment as a controllable Markov Chain $^{1}$ $\mathcal{M} = (\mathcal{S},\mathcal{A},P,\mu_0)$ , where $\mathcal{S}$ and $\mathcal{A}$ are state and action spaces, $P:S\times \mathcal{A}\to \Omega (\mathcal{S})^{2}$ is the transition function and $\mu_0\in \Omega (\mathcal{S})$ is the initial state distribution. In order to allow for partial demonstrations, we additionally define a goal space $\mathcal{G}$ and a surjective function $\phi :\mathcal{S}\rightarrow \mathcal{G}$ which maps each state to its abstract representation. 
To define "goal achievement", we assume the existence of a goal metric $h$ on $\mathcal{G}$ that does not need to be known. We then regard state $s\in \mathcal{S}$ as having achieved goal $g\in \mathcal{G}$ if we have $h(\phi (s),g) < \epsilon$ for some fixed $\epsilon >0$ . For each policy $\pi :\mathcal{S}\to \Omega (\mathcal{A})$ , we can measure the (undiscounted) $N$ -step state and goal occupancies respectively as + +$$ +\varrho_ {N} ^ {\pi} (s) = \frac {1}{N + 1} \sum_ {t = 0} ^ {N} \Pr [ s = s _ {t} ] \tag {1} +$$ + +and + +$$ +\rho_ {N} ^ {\pi} (g) = \frac {1}{N + 1} \sum_ {t = 0} ^ {N} \Pr [ g = \phi \left(s _ {t}\right) ], \tag {2} +$$ + +where $s_0 \sim \mu_0, s_{t+1} \sim P(s_t, a_t)$ and $a_t \sim \pi(s_t)$ . These quantities are particularly important in the context of imitation learning. We refer the reader to (Liu et al., 2023) for a full overview over IL settings, and limit this discussion to offline IL with rough and partial expert trajectories. It assumes access to two datasets: $\mathcal{D}_{\beta} = (s_0^i, a_0^i, s_1^i, a_1^i, \ldots)_{i=1}^{|\mathcal{D}_{\beta}|}$ consisting of full state-action trajectories from $\mathcal{M}$ and $\mathcal{D}_E = (g_0^i, g_1^i, \ldots)_{i=1}^{|\mathcal{D}_E|}$ containing demonstrations of an expert in the form of goal sequences, not necessarily abiding to the dynamic of $\mathcal{M}$ . Note that both datasets do not have reward labels. The goal is to train a policy $\pi$ that imitates the expert, which is commonly formulated as matching goal occupancies + +$$ +\rho_ {N} ^ {\pi} \stackrel {D} {=} \rho_ {N} ^ {\pi_ {E}}. \tag {3} +$$ + +The setting we consider in this work is zero-shot offline IL which imposes two additional constraints on offline IL. First, $\mathcal{D}_E$ is only available at inference time, which means pre-training has to be task-agnostic. We further assume $\mathcal{D}_E$ consists of a single trajectory $(g_0,\dots ,g_m) = g_{0:m}$ . Second, at inference, the agent should imitate $\pi_E$ , with a "modest compute-overhead" (Pathak et al., 2018; Touati & Ollivier, 2021; Pirotta et al., 2024). In practice, imitation of unseen trajectories should be order of magnitudes cheaper than IL from scratch, and largely avoid costly operations (e.g. network updates). + +# 2.2. Optimal Transport + +In the field of machine learning, it is often of interest to match distributions, i.e. find some probability measure $\mu$ that resembles some other probability measure $\nu$ . In recent years there has been an increased interest in Optimal Transportation (OT) (Amos et al., 2023; Haldar et al., 2022; + +![](images/952b08e892474214862bb42060b68f72ba81f4559e7a43fc73ad19213be7c63e.jpg) +Figure 2. An example of Optimal Transport between the discrete approximation $\hat{\mu},\hat{\nu}$ of two Gaussians $\mu ,\nu$ . The cost matrix $C$ consists of the point-wise costs where the cost here is the Euclidian distance. A coupling matrix $T\in \mathcal{U}(\hat{\mu},\hat{\nu})$ (middle) is visualized through lines representing the matching (right). + +Bunne et al., 2023; Pooladian et al., 2024). As illustrated in figure 2, OT does not only compare probability measures in a point-wise fashion, like $f$ -Divergences such as the Kullback-Leibler Divergence ( $D_{\mathrm{KL}}$ ), but also incorporates the geometry of the underlying space. This also makes OT robust to empirical approximation (sampling) of probability measures ((Peyre & Cuturi, 2019), p.129). 
Formally, OT describes the coupling $\gamma \in \mathcal{P}(\mathcal{X} \times \mathcal{Y})$ of two measures $\mu \in \mathcal{P}(\mathcal{X}), \nu \in \mathcal{P}(\mathcal{Y})$ with minimal transportation cost w.r.t. some cost function $c: \mathcal{X} \times \mathcal{Y} \to \mathbb{R}$ . The primal Kantorovich form is given as the optimization problem + +$$ +\mathrm {O T} _ {c} (\mu , \nu) = \inf _ {\gamma \in \mathcal {U} (\mu , \nu)} \int_ {\mathcal {X} \times \mathcal {Y}} c (x _ {1}, x _ {2}) d \gamma (x _ {1}, x _ {2}) \quad (4) +$$ + +where the optimization is over all joint distributions of $\mu$ and $\nu$ denoted as $\gamma \in \mathcal{U}(\mu, \nu)$ (couplings). If $\mathcal{X} = \mathcal{Y}$ and $(\mathcal{X}, c)$ is a metric space then for $p \in \mathbb{N}$ , $W_p^p = \mathrm{OT}_{c^p}$ is called the Wasserstein- $p$ distance which was shown to be a metric on the subset of measures on $\mathcal{X}$ with finite $p$ -th moments (Clement & Desch, 2008). + +Given samples $x_{1},\ldots ,x_{n}\sim \mu$ and $y_{1},\ldots ,y_{m}\sim \nu$ the discrete OT problem between the discrete probability measures $\hat{\mu} = \sum_{i = 1}^{n}a_{i}\delta_{x_{i}}$ and $\hat{\nu} = \sum_{j = 1}^{m}b_{j}\delta_{y_{j}}$ can be written as a discrete version of equation 4, namely + +$$ +\begin{array}{l} \mathrm {O T} _ {c} (\hat {\mu}, \hat {\nu}) = \min _ {\boldsymbol {T} \in \mathcal {U} (\boldsymbol {a}, \boldsymbol {b})} \sum_ {i = 1} ^ {n} \sum_ {j = 1} ^ {m} c \left(x _ {i}, y _ {j}\right) T _ {i j} (5) \\ = \min _ {\boldsymbol {T} \in \mathcal {U} (\boldsymbol {a}, \boldsymbol {b})} \left\langle \boldsymbol {C}, \boldsymbol {T} \right\rangle (6) \\ \end{array} +$$ + +with the cost matrix $C_{ij} = c(x_i, y_j)$ . The marginal constraints can now be written as $\mathcal{U}(a, b) = \{T \in \mathbb{R}^{n \times m} : T \cdot \mathbf{1}_m = a \text{ and } T^\top \cdot \mathbf{1}_n = b\}$ . This optimization problem can be solved via Linear Programming. Furthermore, Cuturei (2013) shows that the entropically regularized version, commonly given as $\mathrm{OT}_{c,\eta}(\hat{\mu}, \hat{\nu}) = \min_{T \in \mathcal{U}(a, b)} \langle C, T \rangle - \eta D_{\mathrm{KL}}(T, ab^\top)$ , can be efficiently solved in its dual form using Sinkhorn's algorithm (Sinkhorn & Knopp, 1967). + +# 2.3. Goal-conditioned Reinforcement Learning + +As techniques from the literature will be recurring in this work, we provide a short introduction to fundamental ideas + +in GC-RL. We can introduce this framework by enriching the controllable Markov Chain $\mathcal{M}$ . We condition it on a goal $g\in \mathcal{G}$ and cast it as an (undiscounted) Markov Decision Process $\mathcal{M}_g = (\mathcal{S}\cup \{\bot \} ,\mathcal{A},P_g,\mu_0,R_g,T_{\max})$ . Compared to the reward-free setting above, the dynamics now include a sink-state $\perp$ upon goal-reaching and a reward of $-1$ until this happens: + +$$ +P _ {g} (s, a) = \left\{ \begin{array}{l l} P (s, a) & \text {i f} h (\phi (s), g) \geq \epsilon \\ \delta_ {\perp} & \text {o t h e r w i s e} \end{array} \right. \tag {7} +$$ + +$$ +R _ {g} (s, a) = \left\{ \begin{array}{l l} - 1 & \text {i f} h (\phi (s), g) \geq \epsilon \\ 0 & \text {o t h e r w i s e} \end{array} \right. \tag {8} +$$ + +where $\delta_x$ stands for the probability distribution assigning all probability mass to $x$ . 
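As a concrete illustration of equations 7 and 8, the goal-conditioned chain $\mathcal{M}_g$ can be sketched as a thin wrapper around a generic simulator; `env.step`, `phi`, and `h` are assumed interfaces here, and the method itself never requires the true $h$ or environment at planning time:

```python
class GoalConditionedMDP:
    """Minimal sketch of M_g (equations 7-8): reward -1 per step until the goal is
    achieved, after which the chain enters an absorbing sink state with reward 0."""

    SINK = object()  # absorbing state entered once the goal has been achieved

    def __init__(self, env, phi, h, eps):
        self.env, self.phi, self.h, self.eps = env, phi, h, eps

    def step(self, state, action, goal):
        if state is self.SINK or self.h(self.phi(state), goal) < self.eps:
            return self.SINK, 0.0                   # goal reached: stay in the sink state
        next_state = self.env.step(state, action)   # assumed: returns the next state
        return next_state, -1.0                     # -1 until h(phi(s), g) < eps
```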
+ +We can now define the goal-conditioned value function as + +$$ +V ^ {\pi} \left(s _ {0}, g\right) = \underset {\mu_ {0}, P _ {g}, \pi} {\mathbb {E}} \left[ \sum_ {t = 0} ^ {T _ {\max }} R _ {g} \left(s _ {t}, a _ {t}\right) \right] \tag {9} +$$ + +where $s_0 \sim \mu_0, s_{t+1} \sim P_g(s_t, a_t), a_t \sim \pi(s_t, g)$ . The optimal goal-conditioned policy is then $\pi^{\star} = \arg \max_{\pi} \mathbb{E}_{g \sim \mu_{\mathcal{G}}, s \sim \mu_0} V^{\pi}(s_0; g)$ for some goal distribution $\mu_{\mathcal{G}} \in \Omega(\mathcal{G})$ . Intuitively, the value function $V^{\pi}(s, g)$ corresponds to the negative number of expected steps that $\pi$ needs to move from state $s$ to goal $g$ . Thus the distance $d = -V^{\star}$ corresponds to the expected first hit time. If no goal abstraction is present, i.e. $\phi = \mathrm{id}_S$ , then $(S, d)$ is a quasimetric space (Wang et al., 2023), i.e. $d$ is non-negative and satisfies the triangle inequality. Note, though, that $d$ does not need be symmetric. + +# 3. Goal Abstraction and Myopic Planning + +The distribution matching objective at the core of IL problems is in general hard to optimize. For this reason, traditional methods for zero-shot IL leverage a hierarchical decomposition into a sequence of GC-RL problems (Pathak et al., 2018; Hao et al., 2023). We will first describe this approach, and then show how it potentially introduces myopic behavior and suboptimality. + +In the pretraining phase, Pathak et al. (2018) propose to train a goal-conditioned policy $\pi_g: S \times \mathcal{G} \to \mathcal{A}$ on reaching single goals and a goal-recognizer $C: S \times \mathcal{G} \to \{0, 1\}$ that detects whether a given state achieves the given goal. Given an expert demonstration $g_{1:M}$ and an initial state $s_0$ , imitating the expert can then be sequentially decomposed into $M$ goal-reaching problems, and solved with a hierarchical agent consisting of two policies. On the lower level, $\pi_g$ chooses actions to reach the current goal; on the higher level, $C$ decides whether the current goal is achieved and $\pi_g$ should target the next goal in the sequence. + +![](images/9b792e6203eff4aa0b3cae3fb0fa6e7c2dac905209cfc89c213580155cdb2098.jpg) +Figure 3. Controllable Markov Chain with $\phi : (x, y) \mapsto x$ . + +We define the pre-image $\phi^{-1}(g) = \{s\in S:\phi (s) = g\}$ as the set of all states that map to a goal, and formalize the suboptimality of the above method under goal abstraction as follows. + +Proposition 3.1. Let us define the optimal classifier $C(s, g) = \mathbf{1}_{h(\phi(s), g) < \epsilon}$ . Given a set of visited states $\mathcal{P} \subseteq S$ , the current state $s \in \mathcal{P}$ , and a goal sequence $g_{1:M} \in \mathcal{G}^M$ , let the optimal hierarchical policy be $\pi_h^\star(s) = \pi^\star(s, g_{i+1})$ , where $i$ is the smallest integer such that there exist a state $s_p \in \mathcal{P}$ with $h(\phi(s_p), g_i) < \epsilon$ , and $i = 0$ otherwise. There exists a controllable Markov Chain $\mathcal{M}$ and a realizable sequence of goals $g_{0:M}$ such that, under a suitable goal abstraction $\phi(\cdot)$ , $\pi_h^\star$ will not reach all goals in the sequence, i.e., $\rho_N^{\pi_h^\star}(g_i) = 0$ for some $i \in [0, \dots, M]$ and all $N \in \mathbb{N}$ . + +Proof. Consider the Markov Chain $\mathcal{M}$ depicted in figure 3 with goal abstraction $\phi : (x, y) \mapsto x$ and $p > 0$ . 
Now, consider the goal sequence $(g_0, g_1, g_2) = (0, 1, 2)$ , which can only be achieved, by a policy taking action $a_1$ in the initial state $s_0 = (0, 0)$ . Consider $\pi_h^\star$ in $s_0$ , with $\mathcal{P} = \{s_0\}$ . The smallest integer $i$ such that $h(\phi(s_0), g_i) < \epsilon$ is $i = 0$ , therefore $\pi_h^\star(s_0) = \pi^\star(s_0, g_1)$ . We can then compare the state-action values $Q$ in $s_0$ : + +$$ +\begin{array}{l} Q ^ {\pi^ {\star} (\cdot , g _ {1})} \left(s _ {0}, a _ {1}, g _ {1}\right) = \sum_ {t = 0} ^ {T _ {\max }} - p ^ {t} = - 1 \cdot \frac {1 - p ^ {\left(T _ {\max } + 1\right)}}{1 - p} \\ < - 1 = Q ^ {\pi^ {*} (\cdot , g _ {1})} \left(s _ {0}, a _ {0}, g _ {1}\right). \tag {10} \\ \end{array} +$$ + +This implies that $\pi_h^\star (s_0) = \pi^\star (s_0,1) = a_0$ . The next state visited by $\pi_h^\star$ will always be $(1,0)$ , from which $(2,1)$ is not reachable, and $g_{2}$ is not achievable. We thus have $\rho_N^{\pi_h^*}(g_2) = 0$ for all $N\in \mathbb{N}$ . + +We remark that this issue arises in the presence of goal abstraction which plays a vital role in the partial demonstration setting we consider. Without goal abstraction, i.e., if each goal is fully specified, there is no leeway in how to achieve it for the policy (assuming $\epsilon \to 0$ as well). Nevertheless, goal abstraction is ubiquitous in practice (Schaul et al., 2015) and necessary to enable learning in complex environments (Andrychowicz et al., 2017). + +# 4. Optimal Transport for Zero-Shot IL + +Armed with recent tools in value estimation, model-based RL and trajectory optimization, we propose a method for zero-shot offline imitation learning that directly optimizes the occupancy matching objective, introducing only minimal approximations. As a result, the degree of myopia is greatly reduced, as we show empirically in section 5. + +In particular, we propose to solve the occupancy matching problem in equation 3 by minimizing the Wasserstein-1 metric $W_{1}$ with respect to goal metric $h$ on the goal space $\mathcal{G}$ , i.e. + +$$ +W _ {1} \left(\rho_ {N} ^ {\pi}, \rho_ {N} ^ {E}\right) = \mathrm {O T} _ {h} \left(\rho_ {N} ^ {\pi}, \rho_ {N} ^ {E}\right). \tag {11} +$$ + +This objective involves two inaccessible quantities: goal occupancies $\rho_N^\pi, \rho_N^E$ , as well as the goal metric $h$ . Our key contribution lies in how these quantities can be practically estimated, enabling optimization of the objective with scalable deep RL techniques. + +Occupancy Estimation Since the expert's and the policy's occupancy are both inaccessible, we opt for discrete, sample-based approximations. In the case of the expert occupancy $\rho_N^E$ , the single trajectory provided at inference $(g_0,\dots ,g_M)$ represents a valid sample from it, and we use it directly. For an arbitrary agent policy $\pi$ , we use a discrete approximation after training a dynamics model $\hat{P}\approx P$ on $\mathcal{D}_{\beta}$ , which can be done offline through standard supervised learning. We can then approximate $\rho_N^\pi$ by jointly rolling out the learned dynamics model and the policy $\pi$ . 
We thus get the discrete approximations + +$$ +\rho_ {N} ^ {E} \approx \hat {\rho} _ {M} ^ {E} = \frac {1}{M + 1} \sum_ {j = 0} ^ {M} \delta_ {g _ {j}} \quad \text {a n d} \tag {12} +$$ + +$$ +\rho_ {N} ^ {\pi} \approx \hat {\rho} _ {N} ^ {\pi} = \frac {1}{N + 1} \sum_ {t = 0} ^ {N} \delta_ {\phi (s _ {t})} \tag {13} +$$ + +where for the latter we sample + +$$ +s _ {0} \sim \mu_ {0}, s _ {t + 1} \sim \hat {P} (s _ {t}, a _ {t}), a _ {t} \sim \pi (s _ {t}). \tag {14} +$$ + +Similarly, we can also obtain an estimate for the state occupancy of $\pi$ as $\varrho_N^\pi \approx \hat{\varrho}_N^\pi = \frac{1}{N + 1}\sum_{t = 0}^N\delta_{s_t}$ . + +Metric Approximation As $h$ may be unavailable or hard to specify in practical settings, we propose to train a goal-conditioned value function $V^{\star}$ from the offline data $\mathcal{D}_{\beta}$ and use the distance $d(s,g) = -V^{\star}(s,g)$ (i.e. the learned first hit time) as a proxy. For a given state-goal pair $(s,g)$ , this corresponds to the approximation $d(s,g) \approx h(\phi (s),g)$ . It is easy to show that a minimizer of $h(\phi (\cdot),g)$ also minimizes $d(\cdot ,g)$ . Using $d$ also has the benefit of incorporating the dynamics of the MDP into the cost of the OT problem. The + +use of this distance has seen some use as the cost function in Wasserstein metrics between state occupancies in the past (Durugkar et al., 2021). As we show in section 5.3, $d$ is able to capture potential asymmetries in the MDP, while remaining informative of $h$ . We note that, while $h: \mathcal{G} \times \mathcal{G} \to \mathbb{R}$ is a distance in goal-space, $d: \mathcal{S} \times \mathcal{G} \to \mathbb{R}$ is a distance between states and goals. Nonetheless, $d$ remains applicable as the policy's occupancy can also be estimated in state spaces as $\hat{\varrho}_N^\pi$ . Given the above considerations, we can rewrite our objective as the discrete optimal transport problem + +$$ +\pi^ {\star} = \underset {\pi} {\arg \min } \mathrm {O T} _ {d} \left(\hat {\varrho} _ {N} ^ {\pi}, \hat {\rho} _ {M} ^ {E}\right). \tag {15} +$$ + +Optimization Having addressed density and metric approximations, we now focus on optimizing the objective in equation 15. Fortunately, as a discrete OT problem, the objective can be evaluated efficiently using Sinkhorn's algorithm when introducing entropic regularization with a factor $\eta$ (Cuturi, 2013; Peyré & Cuturi, 2019). A non-Markovian, deterministic policy optimizing the objective at state $s_k \in S$ can be written as + +$$ +\begin{array}{l} \pi \left(s _ {0: k}, g _ {0: m}\right) \tag {16} \\ \approx \arg \min _ {a _ {k}} \min _ {a _ {k + 1: N - 1}} \mathrm {O T} _ {d, \eta} \left(\frac {1}{N + 1} \sum_ {i = 0} ^ {N} \delta_ {s _ {i}}, \frac {1}{M + 1} \sum_ {j = 0} ^ {M} \delta_ {g _ {j}}\right) \\ \end{array} +$$ + +where $s_{0:k}$ are the states visited so far and $s_{k+1:N}$ are rolled out using the learned dynamics model $\hat{P}$ and actions $a_{k:N-1}$ . Note that while $s_{0:k}$ are part of the objective, they are constant and are not actively optimized. + +Intuitively, this optimization problem corresponds to finding the first action from a sequence $(a_{k:N-1})$ that minimizes the OT costs between the empirical expert goal occupancy, and the induced empirical policy state occupancy. 
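A minimal NumPy sketch of the resulting planning objective: the ground cost between a rolled-out state and an expert goal is the learned first-hit time $-V(s, g)$, both occupancies are uniform empirical measures, and the entropically regularized plan is obtained with a few Sinkhorn iterations. The callable `V(s, g)` and the plain (non-log-domain) solver are assumptions; the actual method plans over TD-MPC2 latents inside a zero-order optimizer.

```python
import numpy as np

def sinkhorn_plan(a, b, C, eta=1.0, iters=100):
    # Entropically regularized OT between marginals a, b with cost matrix C
    # (log-domain stabilization omitted for brevity).
    K = np.exp(-C / eta)                          # Gibbs kernel
    u = np.ones_like(a)
    for _ in range(iters):
        v = b / (K.T @ u)
        u = a / (K @ v)
    return u[:, None] * K * v[None, :]            # transport plan T in U(a, b)

def ot_objective(states, goals, V, eta=1.0):
    # C_ij = -V(s_i, g_j): expected number of steps from rollout state s_i to goal g_j.
    C = np.array([[-V(s, g) for g in goals] for s in states])
    a = np.full(len(states), 1.0 / len(states))   # empirical state occupancy (uniform)
    b = np.full(len(goals), 1.0 / len(goals))     # empirical expert goal occupancy (uniform)
    T = sinkhorn_plan(a, b, C, eta)
    return float((T * C).sum())                   # <C, T>, minimized over action sequences
```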
This type of optimization problem fits naturally into the framework of planning with zero-order optimizers and learned world models (Chua et al., 2018; Ha & Schmidhuber, 2018); while these algorithms are traditionally used for additive costs, the flexibility of zero-order optimizers (Rubinstein & Kroese, 2004; Williams et al., 2015; Pinneri et al., 2020) allows a straightforward application to our problem. The objective in equation 16 can thus be directly optimized with CEM variants (Pinneri et al., 2020) or MPPI (Williams et al., 2015), in a model predictive control (MPC) fashion.

As with other MPC approaches, we are forced to plan for a finite horizon $H$, which might be smaller than $N$, because of imperfections in the learned dynamics model or computational constraints. This is referred to as receding horizon control (Datko, 1969). When the policy rollouts used for computing $\hat{\varrho}_N^{\pi}$ are truncated, it is also necessary to truncate the goal sequence to exclude any goals that cannot be reached within $H$ steps. To this end, we train an extra value function $W$ that estimates the number of steps required to go from one goal to the next by regressing onto $V$, i.e. by minimizing $\mathbb{E}_{s,s^{\prime}\sim \mathcal{D}_{\beta}}[(W(\phi(s);\phi(s^{\prime})) - V(s;\phi(s^{\prime})))^{2}]$. For $i\in [0,\dots,M]$, we can then estimate the time when $g_{i}$ should be reached as

$$
t_i \approx -V\left(s_0; g_0\right) - \sum_{j=1}^{i} W\left(g_{j-1}; g_j\right). \tag{17}
$$

We then simply truncate the online problem to only consider the goals relevant to $s_1, \ldots, s_{k+H}$, i.e. $g_0, \ldots, g_K$ where $K = \min \{j : t_j \geq k + H\}$. We note that this approximation of the infinite-horizon objective can potentially result in myopic behavior if $K < M$; nonetheless, optimal behavior is recovered as the effective planning horizon increases. Algorithm 1 shows how the practical OT objective is computed.

# Algorithm 1 OT cost computation for ZILOT

Require: Pretrained GC value functions $V, W$ and dynamics model $\hat{P}$; horizon $H$, solver iterations $r$ and regularization factor $\eta$.

Initialization: State $s_0$ and expert trajectory $g_{0:M}$, precomputed $t_{0:M}$

input State history $s_{0:k}$, future actions $a_{k:k+H-1}$ {Rollout learned dynamics}

$$
s_{k+1:k+H} \leftarrow \mathrm{rollout}(\hat{P}, s_k, a_{k:k+H-1})
$$

{Compute which goals are reachable}

$$
K \leftarrow \min \{j : t_j \geq k + H\}
$$

{Compute cost matrix}

$$
C_{ij} \leftarrow -V\left(s_i; g_j\right), \quad i \in \{0, \dots, k+H\},\ j \in \{0, \dots, K\}
$$

{Compute uniform marginals}

$$
\boldsymbol{a} \leftarrow \frac{1}{k+H+1}\mathbf{1}_{k+H+1}, \quad \boldsymbol{b} \leftarrow \frac{1}{K+1}\mathbf{1}_{K+1}
$$

{Run Sinkhorn algorithm}

$$
\boldsymbol{T} \leftarrow \mathrm{sinkhorn}(\boldsymbol{a}, \boldsymbol{b}, \boldsymbol{C}, r, \eta)
$$

output $\sum_{ij}T_{ij}C_{ij}$ {Return OT cost}

Implementation The method presented relies solely on three learned components: a dynamics model $\hat{P}$, and the state-goal and goal-goal GC value functions $V$ and $W$. All of them can be learned offline from the dataset $\mathcal{D}_{\beta}$. In practice, we found that several existing deep reinforcement learning frameworks can be easily adapted to learn these functions.
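The following sketch illustrates how Algorithm 1 can sit inside a receding-horizon loop; the dynamics model, value function, OT-cost routine, and zero-order optimizer are passed in as callables (e.g. the Sinkhorn-based cost sketched in section 4 and any iCEM/MPPI implementation), and all names and signatures are illustrative rather than those of the actual codebase.

```python
import numpy as np

def ot_plan_cost(actions, s_k, k, goals, t, H, rollout_fn, value_fn, ot_cost_fn,
                 history=()):
    """Cost of one candidate action sequence a_{k:k+H-1}, mirroring Algorithm 1.
    rollout_fn(s, actions): predicted states s_{k+1:k+H} under the learned dynamics.
    value_fn(s, g): learned goal-conditioned value V(s, g).
    ot_cost_fn(C): entropic OT cost for a cost matrix with uniform marginals."""
    future = rollout_fn(s_k, actions)                   # roll out learned dynamics
    states = list(history) + [s_k] + list(future)       # past states stay in the objective
    reachable = [j for j, tj in enumerate(t) if tj >= k + H]
    K = reachable[0] if reachable else len(goals) - 1   # K = min{j : t_j >= k + H}
    C = np.array([[-value_fn(s, g) for g in goals[:K + 1]] for s in states])
    return ot_cost_fn(C)

def mpc_step(s_k, k, goals, t, H, rollout_fn, value_fn, ot_cost_fn, optimizer,
             history=()):
    """One receding-horizon step: a zero-order optimizer (e.g. iCEM or MPPI) searches
    over action sequences of length H, and only the first action is executed."""
    score = lambda a_seq: ot_plan_cost(a_seq, s_k, k, goals, t, H,
                                       rollout_fn, value_fn, ot_cost_fn, history)
    best_action_sequence = optimizer(score, H)          # returns the best a_{k:k+H-1}
    return best_action_sequence[0]
```

Any zero-order optimizer that proposes action sequences, scores them with the supplied objective, and returns the best candidate fits this interface.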
We adopt TD-MPC2 (Hansen et al., 2024), a state-of-the-art model-based algorithm that has shown promising results in single- and multi-task online and offline RL. We note that planning takes place in the latent space constructed by TD-MPC2's encoders. We adapt the method to allow estimation of goal-conditioned value functions, as described in appendix C. We follow prior work (Andrychowicz et al., 2017; Bagatella & Martius, 2023; Tian et al., 2021) and sample goals from the future part of trajectories in $\mathcal{D}_{\beta}$ in order to synthesize rewards without supervision. We note that this goal-sampling method also does not require any knowledge of $h$.

# 5. Experiments

This section constitutes an extensive empirical evaluation of ZILOT for zero-shot IL. We first describe our experimental settings, and then present qualitative and quantitative results, as well as an ablation study. We consider a selection of 30 tasks defined over 5 environments, as summarized below and described in detail in appendices A and C.

fetch (Plappert et al., 2018) is a manipulation suite in which a robot arm either pushes (Push) or lifts (Pick&Place) a cube towards a goal. To illustrate the failure cases of myopic planning, we also evaluate a variation of Push (i.e. Slide), in which the table size exceeds the arm's range, the table's friction is reduced, and the arm is constrained to be touching the table. As a result, the agent cannot fully constrain the cube, e.g. by picking it up or pressing on it, and the environment strongly punishes careless manipulation. In all three environments, tasks consist of moving the cube along trajectories shaped like the letters "S", "L", and "U".

halfcheetah (Wawrzyński, 2009) is a classic MuJoCo environment in which the agent controls a cat-like robot in a 2D plane. As this environment is not goal-conditioned by default, we choose the x-coordinate and the orientation of the cheetah as a meaningful goal abstraction. This allows the definition of tasks involving standing up and hopping on front or back legs, as well as doing flips.

pointmaze (Fu et al., 2021) involves maneuvering a point-mass through a maze via force control. Downstream tasks consist of following a series of waypoints through the maze.

Planners The most natural comparison is the framework proposed by Pathak et al. (2018), which addresses imitation through a hierarchical decomposition, as discussed in section 3. Both hierarchical components are learned within TD-MPC2: the low-level goal-conditioned policy is by default part of TD-MPC2, while the goal classifier (Cls) can be obtained by thresholding the learned value function $V$. We privilege this baseline (Policy+Cls) by selecting the threshold minimizing $W_{\mathrm{min}}$ per environment among the values $\{1,2,3,4,5\}$. Moreover, we also compare to a version of this baseline replacing the low-level policy with zero-order optimization of the goal-conditioned value function (MPC+Cls), thus ablating any benefits resulting from model-based components. We remark that all MPC methods use the same zero-order optimizer, iCEM (Pinneri et al., 2020). We further compare ZILOT to $\mathbf{ER}_{\mathbf{FB}}$ and $\mathbf{RER}_{\mathbf{FB}}$, two approaches that combine zero-shot RL and reward-based IL using the Forward-Backward (FB) framework (Pirotta et al., 2024). We refer the reader to appendix B for a discussion of all FB-IL approaches in our rough and partial setting.

![](images/4f285d1ede7347208a2f1e67e2ad9d0458542106e61d027a9579065864611a1f.jpg)
Figure 4. Example tasks in fetch SlideLarge_2D.
The left two columns show five trajectories across five seeds of the myopic method MPC+Cls and of ZILOT (ours). The trajectories are drawn in the $x$-$y$ plane of the goal space and show only the movement of the cube. ZILOT's behavior imitates the given goal trajectories more closely. On the right, we visualize the OT objective at around three quarters of the episode time. It includes both the past and planned future states, as well as their coupling to the goals. Note that planning occurs in the latent space of TD-MPC2, and separately trained decoders are used for this visualization.

Metrics We report two metrics for evaluating planner performance. The first one is the minimal encountered (empirical) Wasserstein-1 distance, under the goal metric $h$, between the agent's trajectory and the given goal sequence. Formally, given the trajectory $(s_0, \ldots, s_N)$ and the goal sequence $(g_0, \ldots, g_M)$, we define $W_{\min}(s_{0:N}, g_{0:M})$ as

$$
\min_{k \in \{0, \dots, N\}} W_1\left(\frac{1}{k+1}\sum_{i=0}^{k}\delta_{\phi(s_i)}, \frac{1}{M+1}\sum_{j=0}^{M}\delta_{g_j}\right). \tag{18}
$$

We introduce a secondary metric, "GoalFraction", which represents the fraction of goals that are achieved in the order they were given. Formally, this corresponds to the length of the longest subsequence of achieved goals that matches the desired order, divided by the total number of goals (a short illustrative sketch of how both metrics can be computed is given below).

# 5.1. Can ZILOT effectively imitate unseen trajectories?

We first set out to qualitatively evaluate whether the method is capable of imitation in complex environments, despite practical approximations. Figure 4 illustrates how MPC+Cls and ZILOT imitate an expert sliding a cube across the large table of the fetchslide_large_2D environment. The myopic baseline struggles to regain control over the cube after moving it towards the second goal, leading to trajectories that leave the manipulation range. In contrast, ZILOT plans beyond the second goal. As displayed in the middle part of figure 4, the coupling of the OT problem approximately pairs up each state in the planned trajectory with the appropriate goal, leading to closer imitation of the expert.

![](images/2aad5bb2a0992301df678228eed84898600c357eae2eb1088822f54593408272.jpg)
Figure 5. Performance comparison of ZILOT and other methods aggregated over environments. Table 1 reports more detailed results.

# 5.2. How does ZILOT perform compared to prior methods?

We provide a quantitative evaluation of ZILOT with respect to the other planners in figure 5. For more details we refer the reader to appendix A. As ZILOT directly optimizes a distribution matching objective, it generally reproduces expert trajectories more closely, achieving a lower Wasserstein distance to the expert's distribution. This is especially evident in environments that are very punishing to myopic planning, such as the Fetch Slide environment shown in figure 4. In most environments, our method also outperforms the baselines in terms of the fraction of goals reached. In less punishing environments, ZILOT may sacrifice precision in achieving the next goal exactly for an overall closer match of the expert trajectory, which is most clearly visible in the pointmaze environment.
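For concreteness, the sketch below shows one way to compute the two metrics defined above; the LCS-style reading of GoalFraction and the choice of Wasserstein-1 solver (`w1_fn`) are illustrative assumptions and may differ from the exact evaluation code used for the reported numbers.

```python
import numpy as np

def goal_fraction(traj_goals, goals, h, eps):
    """One plausible reading of GoalFraction: the length of the longest subsequence of
    goals achieved in the given order (an LCS-style dynamic program over time steps and
    goal indices), divided by the number of goals. traj_goals[t] corresponds to phi(s_t)."""
    N, M = len(traj_goals), len(goals)
    L = np.zeros((N + 1, M + 1), dtype=int)
    for t in range(1, N + 1):
        for j in range(1, M + 1):
            hit = 1 if h(traj_goals[t - 1], goals[j - 1]) < eps else 0
            L[t, j] = max(L[t - 1, j], L[t, j - 1], L[t - 1, j - 1] + hit)
    return L[N, M] / M

def w_min(traj_goals, goals, w1_fn):
    """W_min from eq. (18): minimal empirical Wasserstein-1 distance over trajectory
    prefixes. w1_fn(xs, ys) must return W_1 between the uniform empirical measures on
    xs and ys (e.g. an exact EMD solver, or an entropic approximation as in section 4)."""
    return min(w1_fn(traj_goals[:k + 1], goals) for k in range(len(traj_goals)))
```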
We note that the performance of the two myopic baselines $\mathrm{Pi}+\mathrm{Cls}$ and $\mathrm{MPC}+\mathrm{Cls}$ is very similar, suggesting that the performance gap to ZILOT stems from the change in objective, rather than from implementation or model-based components. We suspect the origins of the subpar performance of $\mathrm{ER}_{\mathrm{FB}}$ and $\mathrm{RER}_{\mathrm{FB}}$ are two-fold. First, the FB framework (Touati & Ollivier, 2021; Pirotta et al., 2024) has been found to underperform in low-data regimes (Jeen et al., 2024). Second, $\mathrm{ER}_{\mathrm{FB}}$ and $\mathrm{RER}_{\mathrm{FB}}$ use a regularized f-divergence objective, which translates to an RL problem with additive rewards. As Pirotta et al. (2024) state, this regularization comes at a cost, particularly if states do not contain dynamical information or in ergodic MDPs. In this case, a policy can optimize the reward by remaining in the most likely expert state, yielding a degenerate solution. Conversely, such a solution would be discarded by ZILOT, as it uses an unregularized objective.

# 5.3. What matters for ZILOT?

To validate some of our design choices, we finally evaluate the following versions of our method.

- OT+unbalanced, our method with unbalanced OT (Liero et al., 2018; Séjourné et al., 2019), which turns the hard marginal constraint $\mathcal{U}$ (see section 2.2) into a soft constraint. We use this method to address the fact that a rough expert trajectory may not necessarily yield a feasible expert occupancy approximation.
- OT+Cls, a version of our method which uses the classifier (Cls) (with the same hyperparameter search) to discard all goals that are recognized as reached. This allows the method to consider only future goals and states in the OT objective.
- OT+$h$, our method with the goal metric $h$ on $\mathcal{G}$ as the cost function in the OT problem, replacing $d$.

Our results are summarized in figure 6. First, we see that using unbalanced OT does not yield significant improvements. Second, using a goal classifier can hurt matching performance. We suspect this is the case because keeping track of the history of states gives a better, more informative estimate of which part of the expert occupancy has already been fulfilled. Finally, we observe that the goal metric $h$ may not be preferable to $d$, even if it is available. We mainly attribute this to the fact that, in the considered environments, any action directly changes the state occupancy, but the same cannot be said for the goal occupancy. Since $h$ only allows for the comparison of goal occupancies, the optimization landscape can be very flat in situations where most actions do not change the future state trajectory under goal abstraction, such as the start of the fetch tasks, as visible in the achieved trajectories shown in appendix E. Furthermore, while $h$ is locally accurate, it ignores the global geometry of MDPs, as shown by its poor performance in strongly asymmetric environments (i.e., halfcheetah).

![](images/49e1c86f7d1553cf1a8b89fede872e27abce9d9286037f5264bedfa1b90cf971.jpg)
Figure 6. Ablation of design choices in ZILOT, including coupling constraints (OT+unbalanced), partial trajectory matching (OT+Cls), and the approximation of $h$ by $d$ (OT+$h$). For more detailed results, please refer to table 2.

# 6. Related Work
Zero-shot IL When a substantial amount of compute is allowed at inference time, several existing methods leverage pretrained models to infer actions and retrieve an imitator policy via behavior cloning (Pan et al., 2020; Zhang et al., 2023; Torabi et al., 2018). As already discussed in section 3, most (truly) zero-shot methods cast the problem of imitating an expert demonstration as following the sequence of its observations (Pathak et al., 2018; Hao et al., 2023). Expert demonstrations are then imitated by going from one goal to the next using a goal-conditioned policy. In contrast, our work proposes a holistic approach to imitation, which considers all goals within the planning horizon.

Zero-Shot RL Vast amounts of effort have been dedicated to learning generalist agents without supervision, both on the theoretical (Touati & Ollivier, 2021; Touati et al., 2023) and the practical side (Laskin et al., 2021; Mendonca et al., 2021). Among others, Sancaktar et al. (2022), P. et al. (2021), and Bagatella & Martius (2023) learn a dynamics model through curious exploration and show how it can be leveraged to optimize additive objectives. More recently, Frans et al. (2024) use Functional Reward Encodings to encode arbitrary additive reward functions in a latent that is used to condition a policy. While these approaches are effective in a standard RL setting, they are not suitable for solving instances of global RL problems (Santi et al., 2024) (i.e., distribution matching). One notable exception is the forward-backward framework (Touati & Ollivier, 2021; Pirotta et al., 2024), which we discuss in detail in appendix B.

Imitation Learning A range of recent work has focused on training agents that imitate experts from their trajectories by matching state, state-action, or state-next-state occupancies, depending on what is available. These methods either directly optimize various distribution matching objectives (Liu et al., 2023; Ma et al., 2022) or recover a reward using Generative Adversarial Networks (GANs) (Ho & Ermon, 2016; Li et al., 2023) or, in one instance, OT (Luo et al., 2023). Another line of work has shown impressive real-world results by matching the action distributions directly (Shafiullah et al., 2022; Florence et al., 2021; Chi et al., 2023). However, these approaches either do not operate in a zero-shot fashion or require ad-hoc data collection.

OT in RL Previous works have often used OT as a reward signal in RL. One application is online fine-tuning, where a policy's rollouts are rewarded in proportion to how closely they match expert trajectories (Dadashi et al., 2021; Haldar et al., 2022). Luo et al. (2023) instead use a similar trajectory matching strategy to recover reward labels for unlabelled mixed-quality offline datasets. Most of the works mentioned above rely on simple cosine similarities and Euclidean distances as cost functions in their OT problems.

# 7. Discussion

In this work, we point out a failure mode of current zero-shot IL methods that cast imitating an expert demonstration as following a sequence of goals with myopic GC-RL policies. We address this issue by framing the problem as occupancy matching. By introducing discretizations and minimal approximations, we derive an optimal transport problem that can be directly optimized at inference time using a learned dynamics model, goal-conditioned value functions, and a zero-order optimizer.
Our experimental results across various environments and tasks show that our approach outperforms state-of-the-art zero-shot IL methods, particularly in scenarios where non-myopic planning is crucial. We additionally validate our design choices through a series of ablations.

From a practical standpoint, our method is mainly limited by its reliance on a world model. As the inaccuracy and computational cost of learned dynamics models increase with the prediction horizon, we are forced to optimize a fixed-horizon objective. This may reintroduce a slight degree of myopia that could lead to actions which cause suboptimal behavior beyond the planning horizon. This, however, was not a practical issue in our empirical validation, and we expect our framework to further benefit as the accuracy of learned world models improves. From a theoretical standpoint, ZILOT induces a non-Markovian policy, even when expert trajectories are collected by a Markovian policy, and a Markovian policy would thus be sufficient for imitation. While the space of non-Markovian policies is larger, we find ZILOT to be able to efficiently find a near-optimal policy. This aligns with the fact that several practical zero-shot IL algorithms are based on efficient search over non-Markovian policies (e.g. those based on a goal-reaching policy and a classifier (Pathak et al., 2019; Pirotta et al., 2024)).

# Impact Statement

Advancements in imitation learning may lead to more capable robotic systems across a variety of application domains. While such systems could have societal implications depending on their use cases, our contributions are algorithmic rather than domain-specific.

# Acknowledgments

We thank Anselm Paulus, Mikel Zhobro, and Núria Armengol Urpí for their help throughout the project. Marco Bagatella and Jonas Frey are supported by the Max Planck ETH Center for Learning Systems. Georg Martius is a member of the Machine Learning Cluster of Excellence, EXC number 2064/1 - Project number 390727645. We acknowledge the support from the German Federal Ministry of Education and Research (BMBF) through the Tübingen AI Center (FKZ: 01IS18039A). Finally, this work was supported by the ERC - 101045454 REAL-RL.

# References

Amos, B., Luise, G., Cohen, S., and Redko, I. Meta optimal transport. In Krause, A., Brunskill, E., Cho, K., Engelhardt, B., Sabato, S., and Scarlett, J. (eds.), International Conference on Machine Learning, ICML 2023, 23-29 July 2023, Honolulu, Hawaii, USA, volume 202 of Proceedings of Machine Learning Research, pp. 791-813. PMLR, 2023.
Andrychowicz, M., Crow, D., Ray, A., Schneider, J., Fong, R., Welinder, P., McGrew, B., Tobin, J., Abbeel, P., and Zaremba, W. Hindsight experience replay. In Guyon, I., von Luxburg, U., Bengio, S., Wallach, H. M., Fergus, R., Vishwanathan, S. V. N., and Garnett, R. (eds.), Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, December 4-9, 2017, Long Beach, CA, USA, pp. 5048-5058, 2017.
Bagatella, M. and Martius, G. Goal-conditioned offline planning from curious exploration. In Oh, A., Naumann, T., Globerson, A., Saenko, K., Hardt, M., and Levine, S. (eds.), Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023, 2023.
Brown, T., Mann, B., Ryder, N., Subbiah, M., Kaplan, J.
D., Dhariwal, P., Neelakantan, A., Shyam, P., Sastry, G., Askell, A., Agarwal, S., Herbert-Voss, A., Krueger, G., Henighan, T., Child, R., Ramesh, A., Ziegler, D., Wu, J., Winter, C., Hesse, C., Chen, M., Sigler, E., Litwin, M., Gray, S., Chess, B., Clark, J., Berner, C., McCandlish, S., Radford, A., Sutskever, I., and Amodei, D. Language models are few-shot learners. In Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M., and Lin, H. (eds.), Advances in Neural Information Processing Systems, volume 33, pp. 1877-1901. Curran Associates, Inc., 2020. +Bunne, C., Stark, S. G., Gut, G., del Castillo, J. S., Levesque, M., Lehmann, K.-V., Pelkmans, L., Krause, A., and Ratsch, G. Learning single-cell perturbation responses using neural optimal transport. Nature Methods, 20 (11):1759-1768, Nov 2023. ISSN 1548-7105. doi: 10.1038/s41592-023-01969-x. +Chi, C., Feng, S., Du, Y., Xu, Z., Cousineau, E., Burchfiel, B., and Song, S. Diffusion policy: Visuomotor policy learning via action diffusion. In Proceedings of Robotics: Science and Systems (RSS), 2023. +Chua, K., Calandra, R., McAllister, R., and Levine, S. Deep reinforcement learning in a handful of trials using probabilistic dynamics models. In Bengio, S., Wallach, H. M., Larochelle, H., Grauman, K., Cesa-Bianchi, N., and Garnett, R. (eds.), Advances in Neural Information Processing Systems 31: Annual Conference on Neural Information Processing Systems 2018, NeurIPS 2018, December 3-8, 2018, Montreal, Canada, pp. 4759-4770, 2018. +Clement, P. and Desch, W. An elementary proof of the triangle inequality for the wasserstein metric. Proceedings of The American Mathematical Society - PROC AMER MATH SOC, 136:333-340, 01 2008. doi: 10.1090/S0002-9939-07-09020-X. +Cuturi, M. Sinkhorn distances: Lightspeed computation of optimal transport. In Burges, C., Bottou, L., Welling, M., Ghahramani, Z., and Weinberger, K. (eds.), Advances in Neural Information Processing Systems, volume 26. Curran Associates, Inc., 2013. +Cuturei, M., Meng-Papaxanthos, L., Tian, Y., Bunne, C., Davis, G., and Teboul, O. Optimal transport tools (ott): + +A jax toolbox for all things Wasserstein. arXiv preprint arXiv:2201.12324, 2022. +Dadashi, R., Hussenot, L., Geist, M., and Pietquin, O. Primal wasserstein imitation learning. In 9th International Conference on Learning Representations, ICLR 2021, Virtual Event, Austria, May 3-7, 2021. OpenReview.net, 2021. +Datko, R. Foundations of optimal control theory (e. bruce lee and lawrence markus). SIAM Rev., 11(1):93-95, January 1969. ISSN 0036-1445. doi: 10.1137/1011020. +Duan, Y., Andrychowicz, M., Stadie, B. C., Ho, J., Schneider, J., Sutskever, I., Abbeel, P., and Zaremba, W. One-shot imitation learning. In Guyon, I., von Luxburg, U., Bengio, S., Wallach, H. M., Fergus, R., Vishwanathan, S. V. N., and Garnett, R. (eds.), Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, December 4-9, 2017, Long Beach, CA, USA, pp. 1087-1098, 2017. +Durugkar, I., Tec, M., Niekum, S., and Stone, P. Adversarial intrinsic motivation for reinforcement learning. In Ranzato, M., Beygelzimer, A., Dauphin, Y. N., Liang, P., and Vaughan, J. W. (eds.), Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information Processing Systems 2021, NeurIPS 2021, December 6-14, 2021, virtual, pp. 8622-8636, 2021. +Eysenbach, B., Zhang, T., Levine, S., and Salakhutdinov, R. Contrastive learning as goal-conditioned reinforcement learning. 
In Koyejo, S., Mohamed, S., Agarwal, A., Belgrave, D., Cho, K., and Oh, A. (eds.), Advances in Neural Information Processing Systems 35: Annual Conference on Neural Information Processing Systems 2022, NeurIPS 2022, New Orleans, LA, USA, November 28 - December 9, 2022, 2022. +Flamary, R., Courty, N., Gramfort, A., Alaya, M. Z., Bois-bunon, A., Chambon, S., Chapel, L., Corenflos, A., Fatras, K., Fournier, N., Gautheron, L., Gayraud, N. T., Janati, H., Rakotomamonjy, A., Redko, I., Rolet, A., Schutz, A., Seguy, V., Sutherland, D. J., Tavenard, R., Tong, A., and Vayer, T. Pot: Python optimal transport. Journal of Machine Learning Research, 22(78):1-8, 2021. +Florence, P., Lynch, C., Zeng, A., Ramirez, O. A., Wahid, A., Downs, L., Wong, A., Lee, J., Mordatch, I., and Tompson, J. Implicit behavioral cloning. In Faust, A., Hsu, D., and Neumann, G. (eds.), Conference on Robot Learning, 8-11 November 2021, London, UK, volume 164 of Proceedings of Machine Learning Research, pp. 158-168. PMLR, 2021. +Frans, K., Park, S., Abbeel, P., and Levine, S. Unsupervised zero-shot reinforcement learning via functional reward + +encodings. In *Forty-first International Conference on Machine Learning*, ICML 2024, Vienna, Austria, July 21-27, 2024. OpenReview.net, 2024. +Fu, J., Kumar, A., Nachum, O., Tucker, G., and Levine, S. D4RL: Datasets for Deep Data-Driven Reinforcement Learning, 2021. +Ghosh, D., Walke, H., Pertsch, K., Black, K., Mees, O., Dasari, S., Hejna, J., Kreiman, T., Xu, C., Luo, J., Tan, Y. L., Chen, L. Y., Sanketi, P., Vuong, Q., Xiao, T., Sadigh, D., Finn, C., and Levine, S. Octo: An open-source generalist robot policy. CoRR, abs/2405.12213, 2024. doi: 10.48550/ARXIV.2405.12213. +Ha, D. and Schmidhuber, J. World models. CoRR, abs/1803.10122, 2018. +Haarnoja, T., Zhou, A., Abbeel, P., and Levine, S. Soft actor-critic: Off-policy maximum entropy deep reinforcement learning with a stochastic actor. In Dy, J. and Krause, A. (eds.), Proceedings of the 35th International Conference on Machine Learning, volume 80 of Proceedings of Machine Learning Research, pp. 1861-1870. PMLR, 10-15 Jul 2018. +Haldar, S., Mathur, V., Yarats, D., and Pinto, L. Watch and match: Supercharging imitation with regularized optimal transport. In Liu, K., Kulic, D., and Ichnowski, J. (eds.), Conference on Robot Learning, CoRL 2022, 14-18 December 2022, Auckland, New Zealand, volume 205 of Proceedings of Machine Learning Research, pp. 32-43. PMLR, 2022. +Hansen, N., Su, H., and Wang, X. TD-MPC2: scalable, robust world models for continuous control. In The Twelfth International Conference on Learning Representations, ICLR 2024, Vienna, Austria, May 7-11, 2024. OpenReview.net, 2024. +Hao, P., Lu, T., Cui, S., Wei, J., Cai, Y., and Wang, S. Sozil: Self-optimal zero-shot imitation learning. IEEE Transactions on Cognitive and Developmental Systems, 15(4): 2077-2088, 2023. doi: 10.1109/TCDS.2021.3116604. +Ho, J. and Ermon, S. Generative adversarial imitation learning. In Lee, D., Sugiyama, M., Luxburg, U., Guyon, I., and Garnett, R. (eds.), Advances in Neural Information Processing Systems, volume 29. Curran Associates, Inc., 2016. +Jang, E., Irpan, A., Khansari, M., Kappler, D., Ebert, F., Lynch, C., Levine, S., and Finn, C. BC-Z: Zero-Shot Task Generalization with Robotic Imitation Learning. In Proceedings of the 5th Conference on Robot Learning, pp. 991-1002. PMLR, 2022. + +Jeen, S., Bewley, T., and Cullen, J. M. Zero-shot reinforcement learning from low quality data. Advances in Neural Information Processing Systems 38, 2024. 
+Kim, G., Seo, S., Lee, J., Jeon, W., Hwang, H., Yang, H., and Kim, K. Demodice: Offline imitation learning with supplementary imperfect demonstrations. In *The Tenth International Conference on Learning Representations*, ICLR 2022, Virtual Event, April 25-29, 2022. OpenReview.net, 2022. +Kim, M. J., Pertsch, K., Karamcheti, S., Xiao, T., Balakrishna, A., Nair, S., Rafailov, R., Foster, E. P., Lam, G., Sanketi, P., Vuong, Q., Kollar, T., Burchfiel, B., Tedrake, R., Sadigh, D., Levine, S., Liang, P., and Finn, C. Openvla: An open-source vision-language-action model. CoRR, abs/2406.09246, 2024. doi: 10.48550/ARXIV.2406.09246. +Kojima, T., Gu, S. S., Reid, M., Matsuo, Y., and Iwasawa, Y. Large language models are zero-shot reasoners. In Koyejo, S., Mohamed, S., Agarwal, A., Belgrave, D., Cho, K., and Oh, A. (eds.), Advances in Neural Information Processing Systems, volume 35, pp. 22199-22213. Curran Associates, Inc., 2022. +Laskin, M., Yarats, D., Liu, H., Lee, K., Zhan, A., Lu, K., Cang, C., Pinto, L., and Abbeel, P. URLB: unsupervised reinforcement learning benchmark. In Vanschoren, J. and Yeung, S. (eds.), Proceedings of the Neural Information Processing Systems Track on Datasets and Benchmarks 1, NeurIPS Datasets and Benchmarks 2021, December 2021, virtual, 2021. +Li, C., Vlastelica, M., Blaes, S., Frey, J., Grimminger, F., and Martius, G. Learning agile skills via adversarial imitation of rough partial demonstrations. In Conference on Robot Learning, pp. 342-352. PMLR, 2023. +Liero, M., Mielke, A., and Savare, G. Optimal entropy-transport problems and a new hellinger-kantorovich distance between positive measures. Inventiones mathematicae, 211, 03 2018. doi: 10.1007/s00222-017-0759-8. +Liu, J., He, L., Kang, Y., Zhuang, Z., Wang, D., and Xu, H. CEIL: generalized contextual imitation learning. In Oh, A., Naumann, T., Globerson, A., Saenko, K., Hardt, M., and Levine, S. (eds.), Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023, 2023. +Luo, Y., Jiang, Z., Cohen, S., Grefenstette, E., and Deisenroth, M. P. Optimal transport for offline imitation learning. In The Eleventh International Conference on Learning Representations, ICLR 2023, Kigali, Rwanda, May 1-5, 2023. OpenReview.net, 2023. + +Ma, Y. J., Shen, A., Jayaraman, D., and Bastani, O. Versatile offline imitation from observations and examples via regularized state-occupancy matching. In Chaudhuri, K., Jegelka, S., Song, L., Szepesvari, C., Niu, G., and Sabato, S. (eds.), International Conference on Machine Learning, ICML 2022, 17-23 July 2022, Baltimore, Maryland, USA, volume 162 of Proceedings of Machine Learning Research, pp. 14639-14663. PMLR, 2022. +Mendonca, R., Rybkin, O., Daniilidis, K., Hafner, D., and Pathak, D. Discovering and achieving goals via world models. In Ranzato, M., Beygelzimer, A., Dauphin, Y. N., Liang, P., and Vaughan, J. W. (eds.), Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information Processing Systems 2021, NeurIPS 2021, December 6-14, 2021, virtual, pp. 24379-24391, 2021. +O'Neill, A., Rehman, A., and et. al. Open X-Embodiment: Robotic learning datasets and RT-X models. https://arxiv.org/abs/2310.08864, 2023. +P., M. V., Blaes, S., Pinneri, C., and Martius, G. Risk-averse zero-order trajectory optimization. In Faust, A., Hsu, D., and Neumann, G. 
(eds.), Conference on Robot Learning, 8-11 November 2021, London, UK, volume 164 of Proceedings of Machine Learning Research, pp. 444-454. PMLR, 2021. +Pan, X., Zhang, T., Ichter, B., Faust, A., Tan, J., and Ha, S. Zero-shot imitation learning from demonstrations for legged robot visual navigation. In 2020 IEEE International Conference on Robotics and Automation, ICRA 2020, Paris, France, May 31 - August 31, 2020, pp. 679-685. IEEE, 2020. doi: 10.1109/ICRA40945.2020.9196602. +Pathak, D., Mahmoudieh, P., Luo, G., Agrawal, P., Chen, D., Shentu, Y., Shelhamer, E., Malik, J., Efros, A. A., and Darrell, T. Zero-shot visual imitation. In 2018 IEEE Conference on Computer Vision and Pattern Recognition Workshops, CVPR Workshops 2018, Salt Lake City, UT, USA, June 18-22, 2018, pp. 2050-2053. Computer Vision Foundation / IEEE Computer Society, 2018. doi: 10.1109/CVPRW.2018.00278. +Pathak, D., Gandhi, D., and Gupta, A. Self-supervised exploration via disagreement. In ICML, 2019. +Peyre, G. and Cuturi, M. Computational optimal transport. Found. Trends Mach. Learn., 11(5-6):355-607, 2019. doi: 10.1561/2200000073. +Pineda, L., Amos, B., Zhang, A., Lambert, N. O., and Calandra, R. Mbrl-lib: A modular library for model-based reinforcement learning. *Arxiv*, 2021. + +Pinneri, C., Sawant, S., Blaes, S., Achterhold, J., Stueckler, J., Rolinek, M., and Martius, G. Sample-efficient cross-entropy method for real-time planning. In Kober, J., Ramos, F., and Tomlin, C. J. (eds.), 4th Conference on Robot Learning, CoRL 2020, 16-18 November 2020, Virtual Event / Cambridge, MA, USA, volume 155 of Proceedings of Machine Learning Research, pp. 1049-1065. PMLR, 2020. +Pirootta, M., Tirinzoni, A., Touati, A., Lazaric, A., and Ollivier, Y. Fast imitation via behavior foundation models. In The Twelfth International Conference on Learning Representations, ICLR 2024, Vienna, Austria, May 7-11, 2024. OpenReview.net, 2024. +Plappert, M., Andrychowicz, M., Ray, A., McGrew, B., Baker, B., Powell, G., Schneider, J., Tobin, J., Chociej, M., Welinder, P., Kumar, V., and Zaremba, W. Multi-goal reinforcement learning: Challenging robotics environments and request for research, 2018. +Pooladian, A., Domingo-Enrich, C., Chen, R. T. Q., and Amos, B. Neural optimal transport with lagrangian costs. CoRR, abs/2406.00288, 2024. doi: 10.48550/ARXIV.2406.00288. +Reddy, S., Dragan, A. D., and Levine, S. SQL: imitation learning via reinforcement learning with sparse rewards. In 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net, 2020. +Reed, S. E., Zolna, K., Parisotto, E., Colmenarejo, S. G., Novikov, A., Barth-Maron, G., Gimenez, M., Sulsky, Y., Kay, J., Springenberg, J. T., Eccles, T., Bruce, J., Razavi, A., Edwards, A., Heess, N., Chen, Y., Hadsell, R., Vinyals, O., Bordbar, M., and de Freitas, N. A generalist agent. Trans. Mach. Learn. Res., 2022, 2022. +Rubinstein, R. Y. and Kroese, D. P. The Cross Entropy Method: A Unified Approach To Combinatorial Optimization, Monte-carlo Simulation (Information Science and Statistics). Springer-Verlag, 2004. ISBN 978-0-387-21240-1. +Sancaktar, C., Blaes, S., and Martius, G. Curious exploration via structured world models yields zero-shot object manipulation. In Koyejo, S., Mohamed, S., Agarwal, A., Belgrave, D., Cho, K., and Oh, A. 
(eds.), Advances in Neural Information Processing Systems 35: Annual Conference on Neural Information Processing Systems 2022, NeurIPS 2022, New Orleans, LA, USA, November 28 - December 9, 2022, 2022. +Santi, R. D., Prajapat, M., and Krause, A. Global reinforcement learning : Beyond linear and convex rewards via + +submodular semi-gradient methods. In *Forty-first International Conference on Machine Learning*, ICML 2024, Vienna, Austria, July 21-27, 2024. OpenReview.net, 2024. +Schaul, T., Horgan, D., Gregor, K., and Silver, D. Universal value function approximators. In Bach, F. R. and Blei, D. M. (eds.), Proceedings of the 32nd International Conference on Machine Learning, ICML 2015, Lille, France, 6-11 July 2015, volume 37 of JMLR Workshop and Conference Proceedings, pp. 1312-1320. JMLR.org, 2015. +Sejourné, T., Feydy, J., Vialard, F., Trouve, A., and Peyre, G. Sinkhorn divergences for unbalanced optimal transport. CoRR, abs/1910.12958, 2019. +Shafullah, N. M., Cui, Z. J., Altanzaya, A., and Pinto, L. Behavior transformers: Cloning $\$ \mathbb{k}$ S modes with one stone. In Koyejo, S., Mohamed, S., Agarwal, A., Belgrave, D., Cho, K., and Oh, A. (eds.), Advances in Neural Information Processing Systems 35: Annual Conference on Neural Information Processing Systems 2022, NeurIPS 2022, New Orleans, LA, USA, November 28 - December 9, 2022, 2022. +Sinkhorn, R. and Knopp, P. Concerning nonnegative matrices and doubly stochastic matrices. Pacific Journal of Mathematics, 21:343-348, 1967. +Sutton, R. S. and Barto, A. G. Reinforcement Learning: An Introduction. A Bradford Book, Cambridge, MA, USA, 2018. ISBN 0262039249. +Tassa, Y., Doron, Y., Muldal, A., Erez, T., Li, Y., de Las Casas, D., Budden, D., Abdelmaleki, A., Merel, J., Lefrancq, A., Lillicrap, T., and Riedmiller, M. Deepmind control suite, 2018. +Tian, S., Nair, S., Ebert, F., Dasari, S., Eysenbach, B., Finn, C., and Levine, S. Model-based visual planning with self-supervised functional distances. In 9th International Conference on Learning Representations, ICLR 2021, Virtual Event, Austria, May 3-7, 2021. OpenReview.net, 2021. +Torabi, F., Warnell, G., and Stone, P. Behavioral cloning from observation. In Lang, J. (ed.), Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence, IJCAI 2018, July 13-19, 2018, Stockholm, Sweden, pp. 4950-4957. ijcai.org, 2018. doi: 10.24963/IJCAI.2018/687. +Touati, A. and Ollivier, Y. Learning one representation to optimize all rewards. In Ranzato, M., Beygelzimer, A., Dauphin, Y. N., Liang, P., and Vaughan, J. W. (eds.), Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information Processing Systems 2021, NeurIPS 2021, December 6-14, 2021, virtual, pp. 13-23, 2021. + +Touati, A., Rapin, J., and Ollivier, Y. Does zero-shot reinforcement learning exist? In The Eleventh International Conference on Learning Representations, ICLR 2023, Kigali, Rwanda, May 1-5, 2023. OpenReview.net, 2023. +Wang, T., Torralba, A., Isola, P., and Zhang, A. Optimal goal-reaching reinforcement learning via quasimetric learning, 2023. +Wawrzyński, P. A cat-like robot real-time learning to run. In Kolehmainen, M., Toivanen, P., and Beliczynski, B. (eds.), Adaptive and Natural Computing Algorithms, pp. 380-390, Berlin, Heidelberg, 2009. Springer Berlin Heidelberg. ISBN 978-3-642-04921-7. +Wei, J., Bosma, M., Zhao, V. Y., Guu, K., Yu, A. W., Lester, B., Du, N., Dai, A. M., and Le, Q. V. Finetuned language models are zero-shot learners. 
In The Tenth International Conference on Learning Representations, ICLR 2022, Virtual Event, April 25-29, 2022. OpenReview.net, 2022. +Williams, G., Aldrich, A., and Theodorou, E. A. Model predictive path integral control using covariance variable importance sampling. CoRR, abs/1509.01149, 2015. +Yang, R., Lu, Y., Li, W., Sun, H., Fang, M., Du, Y., Li, X., Han, L., and Zhang, C. Rethinking goal-conditioned supervised learning and its connection to offline RL. In International Conference on Learning Representations, 2022. +Zhang, X., Becker-Ehmck, P., van der Smagt, P., and Karl, M. Action inference by maximising evidence: Zero-shot imitation from observation with world models. In Oh, A., Naumann, T., Globerson, A., Saenko, K., Hardt, M., and Levine, S. (eds.), Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023, 2023. +Zolna, K., Novikov, A., Konyushkova, K., Gülçehre, C., Wang, Z., Aytar, Y., Denil, M., de Freitas, N., and Reed, S. E. Offline learning from demonstrations and unlabeled experience. CoRR, abs/2011.13885, 2020. + +# A. Additional Results + +# A.1. Main Result Details + +In tables 1 and 2 we provide detailed results for the figures 5 and 6. We also provide a summary of all planners we evaluated in figure 7. + +![](images/f45d0fc0de1b26d076c5d649ff662d7e0ab2b46001ff5d2f8c27ea6743c8543b.jpg) +Figure 7. Summarized performance of all discussed Planners. See table 1 and table 2 for detailed results. + +Table 1. Performance of $\mathrm{Pi} + \mathrm{Cls}$ , $\mathrm{MPC} + \mathrm{Cls}$ and ZILOT (ours) in all environments and tasks. Each metric is the mean over 20 trials, we then report the mean and standard deviation of those metrics across 5 seeds. We perform a Welch $t$ -test with $p = 0.05$ do distinguish the best values and mark them bold. Values are rounded to 3 and 2 digits respectively. + +
Task | $W_{\min}$ ↓: ER_FB / RER_FB / Pi+Cls / MPC+Cls / ZILOT (ours) | GoalFraction ↑: ER_FB / RER_FB / Pi+Cls / MPC+Cls / ZILOT (ours)
fetch_pick_and_place-L-dense0.233±0.0170.233±0.0170.089±0.0270.109±0.0240.049±0.0190.17±0.030.17±0.030.65±0.110.58±0.070.88±0.07
fetch_pick_and_place-L-sparse0.170±0.0110.170±0.0110.112±0.0140.127±0.0220.092±0.0150.35±0.040.35±0.040.62±0.050.43±0.040.65±0.05
fetch_pick_and_place-S-dense0.183±0.0190.183±0.0190.113±0.0220.101±0.0220.049±0.0140.16±0.040.16±0.040.41±0.070.62±0.080.85±0.08
fetch_pick_and_place-S-sparse0.098±0.0080.098±0.0080.081±0.0170.091±0.0070.067±0.0060.33±0.080.33±0.080.57±0.060.50±0.040.70±0.06
fetch_pick_and_place-U-dense0.124±0.0210.124±0.0210.127±0.0070.116±0.0150.068±0.0050.21±0.080.21±0.080.47±0.100.60±0.030.70±0.02
fetch_pick_and_place-U-sparse0.163±0.0280.163±0.0280.142±0.0050.160±0.0080.098±0.0030.30±0.070.30±0.070.51±0.020.38±0.030.55±0.05
fetch_pick_and_place-all0.162±0.0100.162±0.0100.111±0.0070.117±0.0120.070±0.0090.25±0.040.25±0.040.54±0.020.52±0.020.72±0.04
fetch.push-L-dense0.246±0.0010.246±0.0010.056±0.0010.085±0.0180.041±0.0150.15±0.000.15±0.000.96±0.030.72±0.090.91±0.06
fetch.push-L-sparse0.184±0.0140.184±0.0140.101±0.0110.103±0.0100.082±0.0040.35±0.040.35±0.040.65±0.090.44±0.040.69±0.06
fetch.push-S-dense0.182±0.0190.182±0.0190.077±0.0240.104±0.0260.049±0.0100.25±0.050.25±0.050.83±0.090.70±0.080.87±0.08
fetch.push-S-sparse0.123±0.0100.123±0.0100.062±0.0040.077±0.0040.064±0.0060.36±0.080.36±0.080.90±0.070.65±0.040.72±0.06
fetch.push-U-dense0.141±0.0110.141±0.0110.102±0.0440.091±0.0090.065±0.0040.28±0.070.28±0.070.72±0.180.67±0.080.77±0.02
fetch.push-U-sparse0.195±0.0240.195±0.0240.106±0.0140.131±0.0120.109±0.0070.32±0.030.32±0.030.70±0.120.45±0.050.53±0.03
fetch.push-all0.178±0.0080.178±0.0080.084±0.0070.098±0.0100.068±0.0050.29±0.020.29±0.020.79±0.050.61±0.030.75±0.03
fetchslide_large_2D-L-dense0.282±0.0140.282±0.0140.258±0.0220.217±0.0340.074±0.0110.17±0.040.17±0.040.26±0.060.40±0.110.76±0.03
fetchslide_large_2D-L-sparse0.255±0.0070.255±0.0070.223±0.0140.185±0.0270.120±0.0110.38±0.050.38±0.050.47±0.100.70±0.050.73±0.04
fetchslide_large_2D-S-dense0.232±0.0290.232±0.0290.299±0.0060.254±0.0220.111±0.0100.19±0.050.19±0.050.21±0.100.31±0.060.51±0.07
fetchslide_large_2D-S-sparse0.215±0.0140.215±0.0140.266±0.0060.230±0.0210.086±0.0150.28±0.040.28±0.040.31±0.020.43±0.020.74±0.04
fetchslide_large_2D-U-dense0.225±0.0510.226±0.0510.214±0.0290.191±0.0450.076±0.0090.14±0.030.14±0.030.30±0.070.35±0.100.76±0.04
fetchslide_large_2D-U-sparse0.291±0.0490.294±0.0480.169±0.0430.150±0.0120.120±0.0050.30±0.040.29±0.040.36±0.090.53±0.040.70±0.06
fetchslide_large_2D-all0.250±0.0120.251±0.0120.238±0.0080.205±0.0200.098±0.0070.24±0.020.24±0.020.32±0.040.45±0.040.70±0.02
halfcheetah-backflip2.002±0.1492.002±0.1493.089±0.5884.281±0.3712.625±0.7800.44±0.080.44±0.080.28±0.130.12±0.120.57±0.17
halfcheetah-backflip-running2.853±0.1042.853±0.1042.879±0.4273.044±0.7522.171±0.4540.06±0.050.06±0.050.44±0.100.46±0.180.58±0.11
halfcheetah-frontflip1.286±0.0591.286±0.0591.544±0.1271.695±0.1471.295±0.0940.73±0.220.73±0.220.77±0.090.79±0.121.00±0.00
halfcheetah-frontflip-running2.137±0.2042.137±0.2042.086±0.1332.083±0.1041.955±0.0570.27±0.080.27±0.080.70±0.080.81±0.070.85±0.03
halfcheetah-hop-backward0.910±0.3160.910±0.3160.806±0.1100.950±0.0750.589±0.1070.65±0.190.65±0.190.96±0.030.90±0.020.96±0.03
halfcheetah-hop-forward1.418±0.3321.418±0.3321.580±0.0691.392±0.2061.101±0.1520.43±0.090.43±0.090.51±0.070.62±0.140.58±0.12
halfcheetah-run-backward0.667±0.0790.667±0.0790.897±0.0920.679±0.0350.489±0.1670.81±0.090.81±0.090.96±0.041.00±0.000.99±0.01
halfcheetah-run-forward0.712±0.0640.712±0.0640.857±0.0440.822±0.2060.376±0.0190.76±0.070.76±0.071.00±0.010.94±0.081.00±0.00
halfcheetah-all1.498±0.1051.498±0.1051.717±0.1011.868±0.0791.325±0.1230.52±0.030.52±0.030.70±0.050.71±0.020.82±0.02
pointmaze_medium-circle-dense1.128±0.2501.128±0.2500.243±0.0380.221±0.0210.156±0.0100.18±0.060.18±0.061.00±0.001.00±0.001.00±0.00
pointmaze_medium-circle-sparse1.483±0.4101.483±0.4100.385±0.0150.404±0.0250.466±0.0240.22±0.000.22±0.001.00±0.001.00±0.000.81±0.11
pointmaze_medium-path-dense0.900±0.3170.900±0.3170.275±0.0630.235±0.0230.199±0.0130.56±0.110.56±0.111.00±0.001.00±0.001.00±0.00
pointmaze_medium-path-sparse1.086±0.5051.086±0.5050.555±0.0800.511±0.0350.459±0.0150.48±0.100.48±0.101.00±0.001.00±0.000.97±0.03
pointmaze-medium-all1.149±0.1631.149±0.1630.365±0.0210.343±0.0230.320±0.0090.36±0.020.36±0.021.00±0.001.00±0.000.94±0.04
+ +Table 2. Performance of our method and its ablations in all environments and tasks. Each metric is the mean over 20 trials, we then report the mean and standard deviation of those metrics across 5 seeds. We perform a Welch $t$ -test with $p = 0.05$ do distinguish the best values and mark them bold. Values are rounded to 3 and 2 digits respectively. + +
Task | $W_{\min}$ ↓: ZILOT+h / ZILOT+Cls / ZILOT+Unbalanced / ZILOT (ours) | GoalFraction ↑: ZILOT+h / ZILOT+Cls / ZILOT+Unbalanced / ZILOT (ours)
fetch_pick_and_place-L-dense0.214±0.0330.091±0.0110.052±0.0180.049±0.0190.26±0.100.68±0.040.84±0.070.88±0.07
fetch_pick_and_place-L-sparse0.188±0.0140.158±0.0040.095±0.0160.092±0.0150.40±0.010.35±0.020.65±0.080.65±0.05
fetch_pick_and_place-S-dense0.198±0.0420.089±0.0190.045±0.0060.049±0.0140.36±0.150.71±0.070.86±0.030.85±0.08
fetch_pick_and_place-S-sparse0.174±0.0290.115±0.0090.056±0.0080.067±0.0060.42±0.080.57±0.020.76±0.080.70±0.06
fetch_pick_and_place-U-dense0.237±0.0430.071±0.0060.060±0.0080.068±0.0050.17±0.100.74±0.040.75±0.040.70±0.02
fetch_pick_and_place-U-sparse0.229±0.0340.167±0.0040.101±0.0080.098±0.0030.34±0.040.33±0.050.54±0.050.55±0.05
fetch_pick_and_place-all0.207±0.0260.115±0.0070.068±0.0080.070±0.0090.32±0.060.56±0.020.73±0.050.72±0.04
fetch.push-L-dense0.211±0.0200.071±0.0060.040±0.0040.041±0.0150.27±0.060.73±0.020.91±0.030.91±0.06
fetch.push-L-sparse0.200±0.0220.150±0.0050.101±0.0140.082±0.0040.39±0.060.36±0.030.65±0.070.69±0.06
fetch.push-S-dense0.203±0.0460.077±0.0080.049±0.0100.049±0.0100.32±0.140.72±0.050.86±0.050.87±0.08
fetch.push-S-sparse0.197±0.0550.097±0.0060.060±0.0090.064±0.0060.40±0.170.56±0.020.78±0.060.72±0.06
fetch.push-U-dense0.228±0.0450.068±0.0070.058±0.0090.065±0.0040.20±0.100.78±0.040.81±0.030.77±0.02
fetch.push-U-sparse0.224±0.0470.136±0.0170.100±0.0070.109±0.0070.36±0.070.39±0.050.61±0.050.53±0.03
fetch.push-all0.211±0.0330.100±0.0060.068±0.0050.068±0.0050.32±0.080.59±0.020.77±0.030.75±0.03
fetchslide_large_2D-L-dense0.255±0.0220.098±0.0270.060±0.0090.074±0.0110.26±0.080.69±0.080.81±0.070.76±0.03
fetchslide_large_2D-L-sparse0.236±0.0200.181±0.0390.112±0.0160.120±0.0110.41±0.040.45±0.080.83±0.080.73±0.04
fetchslide_large_2D-S-dense0.256±0.0350.105±0.0110.091±0.0090.111±0.0100.23±0.100.63±0.030.59±0.100.51±0.07
fetchslide_large_2D-S-sparse0.272±0.0450.132±0.0330.084±0.0100.086±0.0150.28±0.070.52±0.080.79±0.040.74±0.04
fetchslide_large_2D-U-dense0.315±0.0510.087±0.0090.074±0.0110.076±0.0090.12±0.080.75±0.070.75±0.040.76±0.04
fetchslide_large_2D-U-sparse0.288±0.0580.147±0.0090.117±0.0080.120±0.0050.30±0.040.41±0.040.68±0.070.70±0.06
fetchslide_large_2D-all0.270±0.0250.125±0.0110.090±0.0050.098±0.0070.27±0.040.57±0.040.74±0.020.70±0.02
halfcheetah-backflip1.947±0.3123.170±0.7302.710±0.7422.625±0.7800.50±0.180.43±0.140.55±0.200.57±0.17
halfcheetah-backflip-running2.537±0.8102.479±0.2842.297±0.5252.171±0.4540.47±0.270.50±0.110.58±0.160.58±0.11
halfcheetah-frontflip1.172±0.0911.796±0.1731.330±0.1681.295±0.0940.96±0.030.52±0.030.98±0.031.00±0.00
halfcheetah-frontflip-running2.526±0.1102.091±0.2101.969±0.0751.955±0.0570.13±0.070.60±0.060.88±0.090.85±0.03
halfcheetah-hop-backward0.739±0.7360.889±0.1030.548±0.0560.589±0.1070.84±0.330.82±0.070.96±0.040.96±0.03
halfcheetah-hop-forward0.682±0.1201.070±0.0861.007±0.0941.101±0.1520.78±0.120.63±0.080.67±0.070.58±0.12
halfcheetah-run-backward0.555±0.4150.838±0.1390.473±0.1620.489±0.1670.92±0.110.68±0.030.99±0.010.99±0.01
halfcheetah-run-forward0.372±0.1560.742±0.0440.381±0.0260.376±0.0190.93±0.090.72±0.051.00±0.011.00±0.00
halfcheetah-all1.316±0.1811.634±0.0891.339±0.0901.325±0.1230.69±0.060.61±0.020.83±0.020.82±0.02
pointmaze_medium-circle-dense0.252±0.0320.651±0.3770.168±0.0150.156±0.0100.91±0.040.62±0.251.00±0.001.00±0.00
pointmaze_medium-circle-sparse0.465±0.0561.074±0.1150.465±0.0280.466±0.0240.87±0.030.41±0.100.83±0.100.81±0.11
pointmaze_medium-path-dense0.495±0.1301.835±1.0640.192±0.0080.199±0.0130.95±0.030.45±0.291.00±0.001.00±0.00
pointmaze_medium-path-sparse0.716±0.1191.416±0.8280.444±0.0100.459±0.0150.89±0.100.61±0.240.99±0.010.97±0.03
pointmaze_medium-all0.482±0.0551.244±0.4630.317±0.0080.320±0.0090.91±0.020.52±0.150.95±0.030.94±0.04
+ +# A.2. Finite Horizon Ablations + +As discussed in section 4, we are forced to optimize the objective over a finite horizon $H$ due to the imperfections in the learned dynamics model and computational constraints. The hyperparameter $H$ should thus be as large as possible, as long as the model remains accurate. We visualize this trade-off in figure 8 for environment fetch Slide large_2D. It is clearly visible that if the horizon is smaller than 16, the value we chose for our experiments, then performance rapidly deteriorates towards the one of the myopic planners. However, when increasing the horizon beyond 16, performance does not improve, suggesting that the model is not accurate enough to plan beyond this horizon. + +# A.3. Single Goal Performance + +When the expert trajectory consists of only a single goal, myopic planning is of course sufficient to imitate the expert. To verify this we evaluate the performance of all planners in the standard single goal task of the environments. Figure 9 shows the success rate of all planners in this task verifying that non-myopic planning neither hinders nor helps in this case. + +# A.4. Environment Similarity + +We evaluate ZILOT and the myopic baselines $\mathrm{Pi + Cls}$ and MPC+Cls on the walker environment (Tassa et al., 2018) in Table 3. Because this environment is very similar to the halfcheetah environment we use in our main evaluation, we can reuse the same goal-space, tasks, and data collection method. These similarities are also visible in the performance of the three methods. + +![](images/ede59f48eb9d060fc3f3597740cba89307c07c673effaae10ad60348424b3ca0.jpg) +Figure 8. Mean performance across five seeds in fetch-slide_large_2D for different planning horizons. + +![](images/84858d488d6d23cfa939fc74072f162df11e8934993899bf5c8eb9603ae7fe1a.jpg) + +![](images/9c3179255fa8c0f6e3d20ebd25b103acbc60bae046ca51868cef9debe1c92267.jpg) +Figure 9. Single Goal Success Rate in the standard single goal tasks of the environments. We report the mean performance across 20 trials and standard deviation across 5 seeds. + +Table 3. Evaluation on the walker environment with the halfcheetah results repeated for comparison. + +
Task | $W_{\min}$ ↓: Pi+Cls / MPC+Cls / ZILOT (ours) | GoalFraction ↑: Pi+Cls / MPC+Cls / ZILOT (ours)
walker-backflip2.804±0.0561.737±0.1461.273±0.2050.34±0.070.89±0.030.92±0.06
walker-backflip-running3.039±0.2922.444±0.1891.709±0.0930.49±0.070.70±0.080.81±0.09
walker-frontflip2.688±0.4001.830±0.1851.551±0.0860.57±0.160.94±0.040.95±0.07
walker-frontflip-running2.597±0.2651.937±0.1721.921±0.1490.55±0.030.63±0.160.76±0.11
walker-hop-backward1.447±0.0760.872±0.0320.836±0.1000.64±0.110.78±0.060.84±0.07
walker-hop-forward0.932±0.0980.663±0.0710.467±0.0440.95±0.050.99±0.011.00±0.01
walker-run-backward1.290±0.1481.050±0.0860.957±0.1110.81±0.080.83±0.140.84±0.09
walker-run-forward1.180±0.1050.954±0.0790.672±0.0580.86±0.070.97±0.030.99±0.01
walker-all1.997±0.0471.436±0.0491.174±0.0610.65±0.030.84±0.020.89±0.04
halfcheetah-backflip3.089±0.5884.281±0.3712.625±0.7800.28±0.130.12±0.120.57±0.17
halfcheetah-backflip-running2.879±0.4273.044±0.7522.171±0.4540.44±0.100.46±0.180.58±0.11
halfcheetah-frontflip1.544±0.1271.695±0.1471.295±0.0940.77±0.090.79±0.121.00±0.00
halfcheetah-frontflip-running2.086±0.1332.083±0.1041.955±0.0570.70±0.080.81±0.070.85±0.03
halfcheetah-hop-backward0.806±0.1100.950±0.0750.589±0.1070.96±0.030.90±0.020.96±0.03
halfcheetah-hop-forward1.580±0.0691.392±0.2061.101±0.1520.51±0.070.62±0.140.58±0.12
halfcheetah-run-backward0.897±0.0920.679±0.0350.489±0.1670.96±0.041.00±0.000.99±0.01
halfcheetah-run-forward0.857±0.0440.822±0.2060.376±0.0191.00±0.010.94±0.081.00±0.00
halfcheetah-all1.717±0.1011.868±0.0791.325±0.1230.70±0.050.71±0.020.82±0.02
+ +# B. Forward-Backward Representations and Imitation Learning + +In a foundational paper in zero-shot, model-free IL, (Pirotta et al., 2024) propose several methods based on the forward-backward (FB) framework (Touati & Ollivier, 2021). FB trains two functions $F$ and $B$ , which recover a low-rank approximation of the successor measure, as well as a parameterized policy $(\pi_z)_{z \in \mathbb{R}^d}$ . These functions can be trained offline, without supervision, so that for each reward $r$ , an optimal policy $\pi_{z_r}$ can be recovered. This property gives rise to a range of reward-based and occupancy-matching based methods for zero-shot IL. In the following we will go over each method, and + +discuss how it differs from ZILOT in terms of objective. We will highlight how several methods do not directly apply to our setting, which involves expert demonstrations that are actionless, rough, and partial. We refer the reader to section C.10 for implementation details of baselines based on FB. + +# B.1. FB Imitation Learning Approaches + +Behavioral Cloning The first approach in Pirotta et al. (2024) is based on gradient descent on the latent $z$ to find the policy $\pi_z$ that maximizes the likelihood of expert actions. Since this approach strictly requires expert actions it does not apply in our case. + +Reward-Based Imitation Learning (Pirotta et al., 2024) derive two reward-based zero-shot IL methods maximizing the reward $r(\cdot) = \rho^{E}(\cdot) / \rho^{\mathcal{D}_{\beta}}(\cdot)$ (ERFB) (Ma et al., 2022; Kim et al., 2022) and its regularized counterpart $r(\cdot) = \rho^{E}(\cdot) / (\rho^{E}(\cdot) + \rho^{\mathcal{D}_{\beta}}(\cdot))$ (RERFB) (Reddy et al., 2020; Zolna et al., 2020). While ZILOT's objective is based on a Wasserstein distance, these rewards are derived from regularized $f$ -divergence objectives. These objectives are fortunately tractable, and can be minimized by solving an RL problem with additive rewards. In practice, this corresponds to assigning a scalar reward to each state visited by the expert, without considering the order of the states in the expert trajectory. However, as stated in Section 4.2 of Pirotta et al. (2024), this regularization comes at a cost, particularly if the state does not contain dynamical information, or in ergodic MDPs. In this case, a policy can maximize the reward by remaining in the most likely expert state, and the objective might be optimized by degenerate solution. On the other hand, such solution would be discarded by ZILOT, which uses an unregularized objective. + +Nonetheless, these two instantiations are fully compatible with partial and rough demonstrations. Thus, we provide an empirical comparison in Section 5. + +Distribution Matching A further approach in Pirotta et al. (2024) finds the policy $\pi_z$ whose occupancy matches the expert occupancy w.r.t. different distances on the space of measures. ZILOT also performs occupancy matching, but with respect to Wasserstein distances. However, ZILOT is designed to handle state abstraction, i.e. partial states. To the best of our understanding, distribution- and feature-matching flavors of FB-IL require the demonstration to contain full states, unless further FB representations are trained to approximate successor measures over abstract states. While the standard implementation of distribution-matching FB-IL cannot imitate rough demonstrations, we believe that an extension in this direction may be interesting for future work. + +Goal-Based Imitation Pirotta et al. 
(2024) also instantiate a hierarchical, goal-based imitation method, in which the FB framework is only used for goal-reaching. This idea is closely related to one of our baselines $(\mathrm{Pi}+\mathrm{Cls})$. However, their framework assumes that trajectories to imitate are not rough and, instead of using a classifier, the goal can be chosen at a fixed offset in time for each time-step. In any case, their approach remains myopic as per Proposition 3.1. Empirically, Pirotta et al. (2024) observe that this instantiation of FB-IL does not significantly outperform an equivalent method relying on TD3+HER instead. As the latter method is very similar to our $\mathrm{Pi}+\mathrm{Cls}$ baseline, we do not investigate this approach further in this work.

# C. Implementation Details

# C.1. ZILOT

The proposed method is motivated and explained in section 4. We now present additional details.

Sinkhorn First, we rescale the matrix $C$ by $T_{\mathrm{max}}$ and clamp it to the range [0, 1] before running Sinkhorn's algorithm. The precise operation performed is

$$
\boldsymbol{C} \leftarrow \min \left(1, \max \left(0, \boldsymbol{C} / T_{\max}\right)\right). \tag{19}
$$

This is done so that the same entropy regularization $\epsilon$ can be used across all environments, and to ensure there are no outliers that hinder the convergence of the Sinkhorn algorithm. For the algorithm itself, we use a custom implementation for batched OT computation, heavily inspired by Flamary et al. (2021) and Cuturi et al. (2022). We run our Sinkhorn algorithm for $r = 500$ iterations with a regularization factor of $\epsilon = 0.02$.

Truncation When the agent gets close to the end of the expert trajectory, we might have that $t_K < k + H$, i.e. the horizon is larger than needed. We thus truncate the planning horizon to the estimated remaining number of steps (and at least 1), i.e. we set

$$
H_{\text{actual}} \leftarrow \max \left(1, \min \left(t_K - k, H\right)\right). \tag{20}
$$

Unbalanced OT As mentioned in the main text in section 5.3, we can use unbalanced OT (Liero et al., 2018; Séjourné et al., 2019) to address the fact that the uniform marginal for the goal occupancy approximation may not be feasible. Unbalanced OT replaces the hard constraint $T^{\top} \cdot \mathbf{1}_N = \mathbf{1}_M$ with the penalty term $\xi_b \mathrm{KL}(T^\top \cdot \mathbf{1}_N, \mathbf{1}_M)$ in the objective function. For our experiments we have chosen $\xi_b = 1$.

# C.2. TD-MPC2 Modifications

As TD-MPC2 (Hansen et al., 2024) is already a multi-task algorithm that is conditioned on a learned task embedding $t$ from a task id $i$, we only have to switch out this conditioning to a goal latent $z_{g}$ to arrive at a goal-conditioned algorithm, as detailed in table 4. We remove the conditioning on the encoders and the dynamics model $f$ completely, as the goal conditioning of GC-RL only changes the reward but not the underlying Markov Decision Process $\mathcal{M}$ (assuming truncation after goal reaching, see section 2.3). For training we adopt all TD-MPC2 hyperparameters directly (see table 7). As mentioned in the main text, we also train a small MLP to predict $W$ that regresses on $V$.

Table 4. Our modifications to TD-MPC2 to make it goal- instead of task-conditioned.
| | TD-MPC2 (Hansen et al., 2024) | “GC”-TD-MPC2 (our changes) |
|---|---|---|
| Task/Goal Embedding | $t = E(i)$ | $z_g = h_g(g)$ |
| Encoder | $z = h(s, t)$ | $z = h(s)$ |
| Dynamics | $z' = f(z, a, t)$ | $z' = f(z, a)$ |
| Reward Prediction | $r = R(z, a, t)$ | $r = R(z, a, z_g)$ |
| Q-function | $q = Q(z, a, t)$ | $q = Q(z, a, z_g)$ |
| Policy | $a \sim \pi(z, t)$ | $a \sim \pi(z, z_g)$ |
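To make the interface change in Table 4 concrete, the following is a minimal, hypothetical PyTorch sketch of the goal-conditioned component signatures. The module names and layer sizes are illustrative assumptions and do not reproduce the actual TD-MPC2 architecture (which uses deeper MLPs, SimNorm, and discretized value heads).

```python
import torch
import torch.nn as nn

class GCTDMPC2Components(nn.Module):
    """Illustrative signatures for the changes in Table 4: the task embedding
    t = E(i) is replaced by a goal latent z_g = h_g(g), and only the reward
    head, Q-function, and policy are conditioned on it."""

    def __init__(self, obs_dim, goal_dim, act_dim, latent_dim=512):
        super().__init__()
        self.h = nn.Sequential(nn.Linear(obs_dim, latent_dim), nn.ELU())        # encoder z = h(s)
        self.h_g = nn.Sequential(nn.Linear(goal_dim, latent_dim), nn.ELU())     # goal embedding z_g = h_g(g)
        self.f = nn.Linear(latent_dim + act_dim, latent_dim)                    # dynamics z' = f(z, a)
        self.R = nn.Linear(2 * latent_dim + act_dim, 1)                         # reward r = R(z, a, z_g)
        self.Q = nn.Linear(2 * latent_dim + act_dim, 1)                         # value q = Q(z, a, z_g)
        self.pi = nn.Sequential(nn.Linear(2 * latent_dim, act_dim), nn.Tanh())  # policy a ~ pi(z, z_g)

    def forward(self, s, a, g):
        z, z_g = self.h(s), self.h_g(g)
        z_next = self.f(torch.cat([z, a], dim=-1))
        r = self.R(torch.cat([z, a, z_g], dim=-1))
        q = self.Q(torch.cat([z, a, z_g], dim=-1))
        a_pred = self.pi(torch.cat([z, z_g], dim=-1))
        return z_next, r, q, a_pred
```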
+

We have found the computation of pairwise distances $d$ to be the major computational bottleneck in our method, as TD-MPC2 computes them as $d = -V^{\pi}(s,g) = -Q(z,\pi (z,z_{g}),z_{g})$, where $z = h(s)$ and $z_{g} = h_{g}(g)$. To speed up this computation, we train a separate network that estimates the value function directly. It employs a two-stream architecture (Schaul et al., 2015; Eysenbach et al., 2022) of the form $V^{\pi}(z,z_g) = \phi (z)^{\top}\psi (z_g)$, where $\phi$ and $\psi$ are small MLPs, for fast inference of pairwise distances.

Our GC-TD-MPC2 is trained like the original TD-MPC2 with two losses, additionally employing HER (Andrychowicz et al., 2017) to sample goals $g$, which we discuss in detail in Appendix C.4. The first loss combines a multi-step consistency loss for $d$ and $h$ with a single-step TD loss for $R$ and $Q$:

$$
\mathcal{L} = \mathbb{E}_{\substack{(s, a, s')_{0:H} \sim \mathcal{D} \\ g_{0:H} \sim \mathrm{HER}_{\gamma}(s_{0:H})}} \left[ \sum_{t = 0}^{H} \lambda^{t} \left( \left\| z'_{t} - \operatorname{sg}\left(h\left(s'_{t}\right)\right) \right\|_{2}^{2} + \operatorname{CE}\left(R\left(z_{t}, a_{t}, z_{g_{t}}\right), r_{t}\right) + \operatorname{CE}\left(Q\left(z_{t}, a_{t}, z_{g_{t}}\right), q_{t}\right) \right) \right] \tag{21}
$$

where $\mathrm{sg}$ is the "stop-gradient" operator, $z_{t}$, $z_{g_{t}}$, and $z'_{t}$ are defined in Table 4, rewards are $r_{t} = \mathbb{I}_{s_{t} = g_{t}} - 1$, and (undiscounted) TD-targets are $q_{t} = \max\left(r_{t} + \mathbb{I}_{s_{t} \neq g_{t}} \cdot \overline{Q}(z'_{t}, \pi(z'_{t}, z_{g_{t}}), z_{g_{t}}), -T_{\max}\right)$. The second loss is a SAC-style loss for $\pi$ (Haarnoja et al., 2018):

$$
\mathcal{L}_{\pi} = \mathbb{E}_{\substack{(s, a, s')_{0:H} \sim \mathcal{D} \\ g_{0:H} \sim \mathrm{HER}_{\gamma}(s_{0:H})}} \left[ \sum_{t = 0}^{H} \lambda^{t} \left( \alpha Q(z_{t}, \pi(z_{t}, z_{g_{t}}), z_{g_{t}}) + \beta \mathcal{H}(\pi(\cdot \mid z_{t}, z_{g_{t}})) \right) \right], \quad z_{t+1} = d(z_{t}, a_{t}), \; z_{0} = h(s_{0}). \tag{22}
$$

In addition to GC-TD-MPC2, we also train our goal-conditioned value function $V^{\pi}(z,z_{g}) = \phi (z)^{\top}\psi (z_{g})$ using the same TD-targets as in Equation (21):

$$
\mathcal{L}_{\phi, \psi} = \mathbb{E}_{g_{0:H} \sim \mathrm{HER}_{\gamma}(s_{0:H})} \left[ \sum_{t = 0}^{H} \lambda^{t} \left( \phi(z_{t})^{\top} \psi(z_{g_{t}}) - q_{t} \right)^{2} \right]. \tag{23}
$$

# C.3. Runtime

ZILOT runs at 2 to 4 Hz on an Nvidia RTX 4090 GPU, depending on the size of $H$ and the size of the OT problem. Given that the MPC+Cls method runs at around 25 to 72 Hz with the same networks and on the same hardware, it is clear that most computation is spent on preparing the cost matrix $C$ and running the Sinkhorn solver. Several further steps could be taken to speed up the Sinkhorn algorithm itself, including $\eta$-schedules and/or Anderson acceleration (Cuturi et al., 2022), as well as warm-starting it with potentials, e.g. from previous (optimizer) steps or from a trained network (Amos et al., 2023).
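Since the cost matrix and the Sinkhorn solve dominate the runtime, the following is a rough, self-contained PyTorch sketch of how the clamped cost matrix from Equation (19) and a plain Sinkhorn iteration with $\epsilon = 0.02$ might look, assuming the two-stream value networks $\phi$ and $\psi$ described above. It is illustrative only and omits the batching, $\eta$-schedules, and warm-starting tricks mentioned above; the function names are not taken from our codebase.

```python
import torch

def cost_matrix(phi, psi, z_plan, z_goals, t_max):
    """Pairwise costs C[i, j] = -V(z_i, g_j) = -phi(z_i)^T psi(g_j),
    rescaled by T_max and clamped to [0, 1] as in Eq. (19)."""
    with torch.no_grad():
        C = -(phi(z_plan) @ psi(z_goals).T)      # (N, M)
    return (C / t_max).clamp_(min=0.0, max=1.0)

def sinkhorn(C, eps=0.02, n_iters=500):
    """Entropy-regularized OT between uniform marginals via Sinkhorn scaling."""
    N, M = C.shape
    a = torch.full((N,), 1.0 / N, device=C.device)
    b = torch.full((M,), 1.0 / M, device=C.device)
    K = torch.exp(-C / eps)                      # Gibbs kernel
    u = torch.ones_like(a)
    for _ in range(n_iters):
        v = b / (K.T @ u + 1e-30)
        u = a / (K @ v + 1e-30)
    return u[:, None] * K * v[None, :]           # transport plan T
```

For the unbalanced setting of Appendix C.1, one common variant softens the column update by raising it to the power $\xi_b / (\xi_b + \epsilon)$, which corresponds to the KL relaxation of the column marginal.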
# C.4. Goal Sampling

As mentioned in the main text, we follow prior work (Andrychowicz et al., 2017; Bagatella & Martius, 2023; Tian et al., 2021) and sample goals from the future part of trajectories in $\mathcal{D}_{\beta}$ in order to synthesize rewards without supervision. The exact procedure is as follows (a minimal sketch of this procedure is given after Table 5 below):

- With probability $p_{\text{future}} = 0.6$ we sample a goal from the future part of the trajectory with time offset $t_{\Delta} \sim \operatorname{Geom}(1 - \gamma)$.
- With probability $p_{\mathrm{next}} = 0.2$ we sample the next goal in the trajectory.
- With probability $p_{\mathrm{rand}} = 0.2$ we sample a random goal from the dataset.

# C.5. Training

We train our version of TD-MPC2 offline with the datasets detailed in Table 5 for 600k steps. Training took about 8 to 9 hours on a single Nvidia A100 GPU. Note that since TD-MPC2 samples batches of 3 transitions per element, we effectively sample $3 \cdot 256 = 768$ transitions per batch. The resulting models are then used for all planners and experiments.

Table 5. Environment description. We detail the datasets used for training.
| Environment | Dataset | #Transitions |
|---|---|---|
| fetch_push | WGCSL (Yang et al., 2022) (expert + random) | 400k + 400k |
| fetch_pick_and_place | WGCSL (Yang et al., 2022) (expert + random) | 400k + 400k |
| fetch_slide_large_2D | custom (curious exploration (Pathak et al., 2019)) | 500k |
| halfcheetah | custom (curious exploration (Pathak et al., 2019)) | 500k |
| pointmaze_medium | D4RL (Fu et al., 2021) (expert) | 1M |
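Below is a minimal, self-contained sketch of the goal-relabelling procedure from Appendix C.4, assuming goals are obtained from states via the abstraction $\phi$; the function and variable names are illustrative rather than taken from our implementation.

```python
import numpy as np

def sample_relabelled_goal(traj_goals, t, dataset_goals, gamma, rng):
    """HER-style goal sampling for the transition at index t of one trajectory.
    traj_goals: abstracted goals phi(s_0), ..., phi(s_{T-1}) of that trajectory.
    dataset_goals: pool of goals from the whole offline dataset D_beta."""
    u = rng.random()
    if u < 0.6:                      # p_future: a future goal, offset ~ Geom(1 - gamma)
        offset = rng.geometric(1.0 - gamma)
        idx = min(t + offset, len(traj_goals) - 1)
        return traj_goals[idx]
    elif u < 0.8:                    # p_next: the immediately following goal
        return traj_goals[min(t + 1, len(traj_goals) - 1)]
    else:                            # p_rand: a random goal from the dataset
        return dataset_goals[rng.integers(len(dataset_goals))]

# usage: g_t = sample_relabelled_goal(goals, t, all_goals, gamma=0.975,
#                                     rng=np.random.default_rng(0))
```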
+ +# C.6. Environments + +We provide environment details in table 6. Note that while we consider an undiscounted setting, we specify $\gamma$ for the goal sampling procedure above. + +Table 6. Environment details. We detail the goal abstraction $\phi$ , metric $h$ , threshold $\epsilon$ , horizon $H$ , maximum episode length $T_{\mathrm{max}}$ , and discount factor $\gamma$ used for each environment. + +
| Environment | Goal Abstraction $\phi$ | Metric $h$ | Threshold $\epsilon$ | Horizon $H$ | $T_{\max}$ | $\gamma$ |
|---|---|---|---|---|---|---|
| fetch_push | $(x,y,z)_{\text{cube}}$ | $\lVert\cdot\rVert_2$ | 0.05 | 16 | 50 | 0.975 |
| fetch_pick_and_place | $(x,y,z)_{\text{cube}}$ | $\lVert\cdot\rVert_2$ | 0.05 | 16 | 50 | 0.975 |
| fetch_slide_large_2D | $(x,y,z)_{\text{cube}}$ | $\lVert\cdot\rVert_2$ | 0.05 | 16 | 50 | 0.975 |
| halfcheetah | $(x,\theta_y)$ | $\lVert\cdot\rVert_2$ | 0.50 | 32 | 200 | 0.990 |
| pointmaze_medium | $(x,y)$ | $\lVert\cdot\rVert_2$ | 0.45 | 64 | 600 | 0.995 |
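As a hedged illustration of how the metric $h$ and threshold $\epsilon$ from Table 6 are typically used, a goal can be counted as reached once the distance between the abstracted state and the target goal drops below the environment-specific threshold. The helper below is an assumption-level sketch, not code from our implementation.

```python
import numpy as np

def goal_reached(state, goal, phi, eps):
    """True if the abstracted state phi(state) lies within eps of the goal
    under the Euclidean metric used in Table 6."""
    return float(np.linalg.norm(phi(state) - np.asarray(goal))) <= eps

# e.g. for fetch_push: phi = lambda s: s["cube_pos"], eps = 0.05
```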
+ +The environments fetch.push and fetch_pick_and_place and pointmaze_medium are used as is. As halfcheetah is not goal-conditioned by default, we define our own goal range to be $(x,\theta_y)\in [-5,5]\times [-4\pi ,4\pi ]^3$ . fetch Slide large_2D is a variation of the fetch Slide environment where the table size exceeds the arm's range and the arm is restricted to two-dimensional movement touching the table. + +# C.7. Tasks + +The tasks for the fetch and pointmaze environments are specified in the environments normal goal-space. Their shapes can be seen in the figures in appendix E. To make the tasks for halfcheetah more clear, we visualize some executions of our method in the figures 10, 11, 12, 13, 14, and 15. + +![](images/2eb4a3015334be85a5e43efcdee6c281205ab22af77d2d6e2df74ef2151c7354.jpg) +Figure 10. Example trajectory of ZILOT (ours) in halfcheetah-backflip-running. + +![](images/45a458b6a1877f4954d5f1258bf3a6716eb4c9b3599a003053bb37f73640f75f.jpg) +Figure 11. Example trajectory of ZILOT (ours) in halfcheetah-backflip. + +![](images/e6b7b236c7374ab54f9e2804ef37cb884846fcd802a1a0935b0b26ee376d65c6.jpg) +Figure 12. Example trajectory of ZILOT (ours) in halfcheetah-frontflip-running. + +![](images/a83b1484381ec300e789b71c8a110f540d66604e95098b5aeec28bd82a8c91e1.jpg) +Figure 15. Example trajectory of ZILOT (ours) in halfcheetah-hop-forward. + +![](images/afb9573dc88a4136ef7a8ff46b0924c5a23879c5fb0d4bb98baaa56e82294d7b.jpg) +Figure 13. Example trajectory of ZILOT (ours) in halfcheetah-frontflip. + +![](images/225a3dc02dd8781e698aebe7cce6c3aeda9d4f02407496d73cb710c9e0503e5c.jpg) +Figure 14. Example trajectory of ZILOT (ours) in halfcheetah-hop-backward. + +# C.8. Task Difficulty + +This section investigates the ability of ZILOT to imitate trajectories that do not appear in the offline dataset it is trained on. As ZILOT uses a learned dynamics model and an off-policy value function, it should in theory be able to stitch together any number of trajectories in the dataset. To get some qualitative intuition we overlay the following: first, a kernel density estimate of the data distribution in the offline datasets, second, an expert trajectory to imitate, and finally the five trajectories that are closest to the expert w.r.t. the Wasserstein distance under the goal-metric $h$ . We present a few tasks for each environment in Figures 16, 18, 19, 17, and 20. + +Comparing the density estimates and the expert trajectories, we can see that essentially all expert trajectories are within distribution. Although, especially in halfcheetah, there are some tasks, such as hop-forward and backflip-running with very little coverage which might explain the bad performance of all planners in these tasks (see table 1). Comparing the selected trajectories with the expert trajectory, it is also evident that the expert demonstrations are not directly present in the datasets. Thus, ZILOT is capable of imitating unseen sequences of states, as long as each individual state is within the support of the training data. In other words, ZILOT is capable of off-policy learning, or trajectory stitching. 
+ +![](images/67b686b36c6273fa8c998cc9a8d8c2c04d58dbb20f3315b92e22b14371f1118c.jpg) +(a) hop-forward + +![](images/8cb235d245199e5b55dc95b242a5e75c36cda12c0900616019130d6cc96d7380.jpg) +(b) hop-backward + +![](images/43000f6e0683bcf207a2bcf903e807c92f1a1b9595043119d14788d3759c3ef3.jpg) +(c) backflip + +![](images/15744608ac872a0dad3791dffb601aa6c823ee1028118aa9d814953dade611cb.jpg) +(d) frontflip + +![](images/17d4418710ae70694e062ebc1522b3a24d31f9ecc1cc8b8b90251410ef653678.jpg) +(e) frontflip-running + +![](images/35767e2bc4bc0d11dca7cae3279625a564684a7c12187feccbfdcdf741257169.jpg) +(f) backflip-running +Figure 16. The 5 trajectories (blue) from the dataset that are closest to the expert trajectory in different halfcheetah tasks (orange) overlayed over a kernel density estimate of the goal occupancy in the full training dataset. + +![](images/fb415897016b0e41153e86f4d6186cfbff00481f258c1abb4a93fb6ff3948ec7.jpg) +(a) L-dense + +![](images/ceaeed960779e5e51fb9e3418c310b30b09853d87141af7b5fcc03b714cfced7.jpg) +(b) U-dense +Figure 17. The 5 trajectories (blue) from the dataset that are closest to the expert trajectory in different fetch Slide large 2D tasks (orange) overlayed over a kernel density estimate of the goal occupancy in the full training dataset. + +![](images/9c322b5df5a48aacb15620042951eb4768c45f699f59d47b29332d2d91e2b34d.jpg) +(c) S-dense + +![](images/ecd05671857607a90f7b53351821adf4efaa746cbe91a3629eb1622e8d041c9b.jpg) +(a) L-dense + +![](images/5e72c4ae5672a91faadd6ca83635d1160cbeecd2d4f4e049b45fda43e4e512d0.jpg) +(b) U-dense + +![](images/92f4db0658891f9d9f2eebc6bd5941e3dd84122b0d314030fb74d70897731811.jpg) +(c) S-dense + +![](images/7e63742380683f324ee23ab8e7d67c63189c27ef54fb2665df80afc1a903aea0.jpg) +Figure 18. The 5 trajectories (blue) from the dataset that are closest to the expert trajectory in different fetch-push tasks (orange) overlayed over a kernel density estimate of the goal occupancy in the full training dataset. +(a) L-dense +Figure 19. The 5 trajectories (blue) from the dataset that are closest to the expert trajectory in different fetch_pick_and_place tasks (orange) overlayed over a kernel density estimate of the goal occupancy in the full training dataset. + +![](images/fe51565599cba5566bb21937fb5e19019559d6c6b9aba6e87b5ad06851509fc4.jpg) +(b) U-dense + +![](images/83c8662078dcf6970a72e2698f388c09359e2c68d70a6078653f1d10d5c073ce.jpg) +(c) S-dense + +![](images/d8f634fbcf17ab146876ebf30151f2d2d8d8dd9dbe7447380fd80b479092cf5c.jpg) +(a) circle-dense + +![](images/8ee1e0b4292dbb6f2bf29410ff7537760818c6f8ea60c0a465d59642b79729d7.jpg) +(b) path-dense +Figure 20. The 5 trajectories (blue) from the dataset that are closest to the expert trajectory in different pointmaze medium tasks (orange) overlayed over a kernel density estimate of the goal occupancy in the full training dataset. + +# C.9. Hyperparameters + +Table 7. TD-MPC2 Hyperparameters. We have adopted these unchanged from Hansen et al. (2024) + +
| Name | Value |
|---|---|
| lr | 3e-4 |
| batch_size | 256 |
| n_steps ("horizon") | 3 |
| rho | 0.5 |
| gradclip_norm | 20 |
| enc_lr_scale | 0.3 |
| value_coeff | 0.1 |
| reward_coeff | 0.1 |
| consistency_coeff | 20 |
| tau | 0.01 |
| log_std_min | -10 |
| log_std_max | 2 |
| entropy_coeff | 1e-4 |
+ +
| Name | Value |
|---|---|
| num_bins | 101 |
| vmin | -10 |
| vmax | 10 |
| num_enc_layers | 2 |
| enc_dim | 256 |
| num_channels | 32 |
| mlp_dim | 512 |
| latent_dim | 512 |
| bin_dim | 12 |
| num_q | 5 |
| dropout | 0.01 |
| simnorm_dim | 8 |
+ +Table 8. Hyperparameters used for iCEM (Pinneri et al., 2020). We use the implementation from Pineda et al. (2021). +(a) ICEM hyperparameters for all MPC planners. + +
| Name | Value |
|---|---|
| num_iterations | 4 |
| population_size | 512 |
| elite_ratio | 0.01 |
| population_decay_factor | 1.0 |
| colored_noise_exponent | 2.0 |
| keep_elite_frac | 1.0 |
| alpha | 0.1 |
+ +(b) ICEM hyperparameters for curious exploration. + +
| Name | Value |
|---|---|
| num_iterations | 3 |
| population_size | 512 |
| elite_ratio | 0.02 |
| population_decay_factor | 0.5 |
| colored_noise_exponent | 2.0 |
| keep_elite_frac | 1.0 |
| alpha | 0.1 |
| horizon | 20 |
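The colored_noise_exponent entries above control how temporally correlated the action sequences sampled by iCEM are. As a hedged illustration (not the implementation from Pineda et al. (2021)), colored noise with power spectrum proportional to $1/f^{\beta}$ can be generated in the frequency domain as follows.

```python
import numpy as np

def colored_noise(beta, horizon, n_samples, rng=None):
    """Sample noise sequences with power spectrum ~ 1/f^beta.
    beta=0 gives white noise; beta=2 (Table 8) gives strongly correlated noise."""
    rng = np.random.default_rng() if rng is None else rng
    freqs = np.fft.rfftfreq(horizon)             # [0, ..., 0.5]
    scale = np.ones_like(freqs)
    scale[1:] = freqs[1:] ** (-beta / 2.0)       # leave the DC component untouched
    re = rng.standard_normal((n_samples, freqs.size)) * scale
    im = rng.standard_normal((n_samples, freqs.size)) * scale
    noise = np.fft.irfft(re + 1j * im, n=horizon, axis=-1)
    noise /= noise.std(axis=-1, keepdims=True) + 1e-8   # unit std per sample
    return noise
```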
+

# C.10. FB Implementation Details

Since there is no implementation available for FB-IL directly, we have adapted the code for FB (Touati & Ollivier, 2021) according to the architectural details in Appendix D.3 and the hyperparameters in Appendix D.4 of FB-IL (Pirotta et al., 2024). The main architectural changes consisted of changing the state input of the $B$ networks to only a goal input, as suggested in Touati & Ollivier (2021), as well as adding a last layer in the $B$ networks for L2 projection, batch normalization, or nothing, depending on the environment.

We follow the specifications of Pirotta et al. (2024) whenever possible. As halfcheetah and maze are also used in their evaluations, we have adopted their hyperparameters for these environments. For our fetch environments, we used the hyperparameters most common across the other environments, except for the discount $\gamma$, which we adjusted to 0.95 to account for the shorter episode length. Finally, we have found that the FB framework seems ill-suited to being trained on an order of magnitude less data than in the original experiments (Touati & Ollivier, 2021; Pirotta et al., 2024). For some environments, performance started to deteriorate rather quickly, so we report the best performance encountered during training when evaluating every 50k steps (see Appendix D.2). We provide the full set of hyperparameters in Table 9.

Table 9. Hyperparameters used for FB-IL training. Closely follows Table 1 in Appendix D.4 of Pirotta et al. (2024) for halfcheetah and maze.
| Environment | fetch | halfcheetah | maze |
|---|---|---|---|
| Representation dimension | 50 | 50 | 100 |
| Batch size | 2048 | 2048 | 1024 |
| Discount factor $\gamma$ | 0.95 | 0.98 | 0.99 |
| Optimizer | Adam | Adam | Adam |
| Learning rate of $F$ | $10^{-4}$ | $10^{-4}$ | $10^{-4}$ |
| Learning rate of $B$ | $10^{-4}$ | $10^{-4}$ | $10^{-6}$ |
| Learning rate of $\pi$ | $10^{-4}$ | $10^{-4}$ | $10^{-6}$ |
| Normalization of $B$ | L2 | None | Batchnorm |
| Momentum for target networks | 0.99 | 0.99 | 0.99 |
| Stddev for policy smoothing | 0.2 | 0.2 | 0.2 |
| Truncation level for policy smoothing | 0.3 | 0.3 | 0.3 |
| Regularization weight for orthonormality | 1 | 1 | 1 |
| Number of training steps | $2 \cdot 10^6$ | $2 \cdot 10^6$ | $2 \cdot 10^6$ |
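For context on how the reward-based FB baselines turn an expert demonstration into a policy latent, the snippet below is a hedged sketch assuming the standard FB reward-projection rule $z_r = \mathbb{E}_{s \sim \mathcal{D}}[r(s) B(s)]$, which for $r = \rho^{E} / \rho^{\mathcal{D}_{\beta}}$ reduces to averaging $B$ over expert states. The function and variable names are illustrative and not taken from the FB-IL code.

```python
import torch

@torch.no_grad()
def erfb_policy_latent(B, expert_states, normalize=True):
    """Average the backward embedding B over expert (goal) states to obtain
    the latent z that conditions the pre-trained FB policy pi_z."""
    z = B(expert_states).mean(dim=0)
    if normalize:  # e.g. when B uses an L2-projected last layer (Table 9)
        z = z / (z.norm() + 1e-8)
    return z
```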
+ +# D. Hyperparameter Searches + +# D.1. Classifier Threshold + +As mentioned in the main text, we perform an extensive hyperparameter search for the threshold value of the goal classifier (Cls) for the myopic methods $\mathrm{Pi + Cls}$ and $\mathrm{MPC + Cls}$ as well as for the ablation of our method ZILOT+Cls. In figures 22 and 21 we show the performance of the three respective planners in all five environments and denote the threshold values that yield the best performance per environment. Interestingly, in some of the fetch environments not all tasks attain maximum performance with the same threshold value showing that this hyperparameter is rather hard to tune. + +![](images/8900030c1720e5a6cb25728716cab1f1fb35f142be35a83d793d7c726a6e2b9b.jpg) +(a) fetch_pick_and_place + +![](images/4122f63e15f685eebf678fc0ff99c7f83878546dbcba40b62da8b9cd8cb2eca5.jpg) +(b) fetch.push + +![](images/e15b1d8dea87fca673cf9f6e538fa4aaeddc27377566822f9491d8c001e997e7.jpg) +(c) fetch_slide_large_2D + +![](images/5a128b4f21e7857faf3bb628615b85704e1ae85cea5862c9851df322d8979014.jpg) +(d) pointmaze_medium + +![](images/3504b252b84f3c571d7040e1bcb0868e9b7d9094012cc32cae5135b76854e89a.jpg) +(e) halfcheetah +Figure 21. ZILOT+Cls hyperparameter search for Cls threshold. + +![](images/70811f81ade8c7352a3ebc2ff959292ccb3a445d13107c3254b40c08cb266052.jpg) +- Best $W_{\min}$ +- Best GoalFraction +- S-dense + +![](images/f645cb2253c73a3f93559e0e80e3e6db68e2362d191564e55b0cddd118c51a79.jpg) +-sparse U-dense U-dense U-sparse + +![](images/ae5d07f0fc5c4f1cbe3ca70123210e85b862dd9ade8204938f5f73654bef067b.jpg) +(a) $\mathrm{Pi} + \mathrm{Cls}$ for fetch_pick_and_place + +![](images/177130a7ec32b92f31adeae72aee3c17ba6df23e87d5e4ed8e8e29f781cd9f58.jpg) +(c) $\mathrm{Pi + Cls}$ for fetch\_push + +![](images/c0a0189c27bf10bc1bd0c51e73ca232712d595955542d92e7cba6b318f6b1003.jpg) +(e) Pi+Cls for fetch_slide_large_2D + +![](images/9d7bb1f5a979d7c525f29454ceb786461d48a647079fbf03b7c5116d5200d123.jpg) + +![](images/dcb4ce0cd0ad69597d7f58a9b2477a1970fd8f2c7f814fad3f550e292300d9ae.jpg) +(g) $\mathrm{Pi} + \mathrm{Cls}$ for halfcheetah + +![](images/bb5847505b6cb34726f47d495fd8b0a3dbb01649011b64b6de7e04f6ee757d46.jpg) + +![](images/1bde3d29486204a02b4e9ac7c6e5c71e54359ef7763858a8deca64e1f3d74fe9.jpg) +(i) $\mathrm{Pi} + \mathrm{Cls}$ for pointmaze_medium + +![](images/4863ef0f674e42a055e3672d2226fa5b1e730d060070445806683c90c6c23144.jpg) +Figure 22. $\mathrm{Pi + Cls}$ and $\mathrm{MPC + Cls}$ hyperparameter searches for CIs threshold in each environment. 
+ +![](images/c0b29282dab1814bc90d75f6bc478affa5b1b607938b11b0239a7ed4637d300d.jpg) +- Best $W_{\mathrm{min}}$ +- Best GoalFraction +- S-dense + +![](images/cea1556fb0ad8e00ea7c223ba229d3add010c7df2dfc01e3164d7bbe92af32c1.jpg) +S-sparse L-dense L-sparse +U-dense U-sparse + +![](images/feb2b91ed0df4a80f00c871325f63ad59c8b164be1062399125ce72e8b28c3c9.jpg) +(d) fetch.push + +![](images/ad5aef853803356271361f60d3594e4e10e44e1d6c103ddff9628e3b18e6eff3.jpg) +(b) MPC+Cls for fetch_pick_and_place + +![](images/e047a14f8f8673f46289e0935f4b365f90390e5a2f3e2f41e6fb4be405ac7a8e.jpg) +(f) MPC+Cls for fetch_slide_large_2D + +![](images/dc9a263997e1220777a1fb47a025241287e4a6819628f20a4306460b70d79f25.jpg) + +![](images/8e537fa2c6fbec6c5033202be091c1766d311d0a8e4ed1b3b09834cdaddd661d.jpg) +(h) MPC+Cls for halfcheetah + +![](images/727be805be65e174548b9172e830916567c56d10bf0d29785282b4edf52fdc4b.jpg) + +![](images/1580ec1c9b258206059c79be7d67c1a3ea492ba199a6ae725e2ae37e25e53495.jpg) +(j) MPC+Cls for pointmaze_medium + +![](images/39d6a61aa03b24343bedf348fe155e1b0a28e66a07eadc025539193b2770c318.jpg) + +# D.2. FB-IL Training steps. + +As mentioned in section C.10 we report the best evaluation results of all FB methods that occur during training. In figures 23 and 24 we report the evaluation performance for different training lengths. + +![](images/bb073c1590029d8b34dec28f66a9b1d5f58c8576677a1856a8a644baf2112548.jpg) +(a) fetch_pick_and_place + +![](images/e0b890647b1d4466986495ed4e6388904dd4a0d3e5f4f7dd22fae2ba246dc730.jpg) +(b) pointmaze_medium + +![](images/7263df4732bd96a2d2843c44c5b2573a82f23f22fdad967e19d24a3267ccfbfc.jpg) +(c) fetch SlideLarge_2D + +![](images/d3eb14efe17cee147e487ba573e59f1591dabfba599d6c3f222a8a00753e3b5b.jpg) +(d) halfcheetah + +![](images/3ac96f897c89bdb1c8b2e6d8846b3fd08047758a3578b914a50ac06d92c36feb.jpg) +(e) fetch.push +Figure 23. $\mathrm{ER_{FB}}$ evaluation performance during training. + +![](images/7b169c8e738e8a5937c491029a6d86225127ccdb9828131991a93053f42243a2.jpg) +(a) fetch_pick_and_place + +![](images/0cedef4904e425291c479d2d40dcb2bb8b7d61093cd5262fae201209f8f46d18.jpg) +(b) pointmaze_medium + +![](images/7af5275982e1d9b4bb3587564b69ff5f55e4a3b79b3557f72050ed62c5409294.jpg) +(c) fetch SlideLarge_2D + +![](images/a0391b0b199482cab2b1fc5bb8cd2ccf0da60f68d6c7eef0fb931c911e63e0e8.jpg) +(d) halfcheetah + +![](images/90a5b66b2664232af289b6962bfe1e27bce70cf489280cfba51e82ab87df3e90.jpg) +(e) fetch.push +Figure 24. $\mathrm{RER}_{\mathrm{FB}}$ evaluation performance during training. + +# E. Additional Qualitative Results + +In the following, we present all goal-space trajectories across all planners, tasks, and seeds presented in this work. Note that since the tasks of the fetch environments display some natural symmetries, we decided to split evaluations between all four symmetrical versions of them. Further, we quickly want to stress that these trajectories are shown in goal-space. This means that if the cube in fetch is not touched, as is the case in some cases for ZILOT $+h$ , then the trajectory essentially becomes a single dot at the starting position. Also note that $\mathrm{Pi} + \mathrm{Cls}$ is completely deterministic, which is why its visualization appears to have less trajectories. + +![](images/fcfb8f07e5005021e0cd8ace36fd4042871730ee77f3be6f86f26369b9521623.jpg) +(a) U-dense +Figure 25. 
fetch_pick_and_place + +![](images/1535bfdf92315957bbb9634a290ce637136aa65452680143f337c6908f62fa0e.jpg) +(b) U-sparse + +![](images/afb320c1182518e8eaf915ae9e7cb860db0560be649bc3ba7815c793d33809fa.jpg) +Figure 26. fetch_pick_and_place + +![](images/dcdc40e6a36a7ad8436d6c0c57394b45f61fdfa60ea04ad1f8031b524c5627b5.jpg) + +![](images/2925fbda23054e97dbada2b636991511b35824078b8dc427fd68837cd5ecead3.jpg) +Figure 27. fetch_pick_and_place + +![](images/dc8f6b27ce3497251b05214bdedb60dff5059c9e9688462a60b88231cf394a4b.jpg) + +![](images/7ebae2dcac1bb23c334225835374a64d49d667f2a35cbace7533dadfb3b7d157.jpg) +(a) U-dense + +![](images/e5792f6df72d527d04d12af6138cc04c642ee737776580ff3a1d2bd6d622100b.jpg) +(b) U-sparse + +![](images/2fdeeb3ee0cc3c17c75a1681cb8aeb80ca88193707b3edcf1d298184d2827e66.jpg) +(a) L-sparse +Figure 29. fetch Slide large_2D + +![](images/98d9ba4e58f83cd08fa58e7574e31ad571a5a399b10b28e240eb89c8ee2ef73e.jpg) +Figure 28. fetch_slide_large_2D +(b) L-dense + +![](images/3d35890472154a89060b3144f197b9d6240896ffe5efcba319458e14efef314b.jpg) +(a) S-dense + +![](images/43b9a3a180d17fae4980ce0b579b18afb27482b5ca1a1d577fc8f1918cfcce7c.jpg) +(b) S-sparse + +![](images/e0cbcca4e4317e1fc337d8b1ab16255969a8b41098ae7bb95e855847630d0473.jpg) +(a) U-dense +Figure 31. fetch.push + +![](images/299f3d4a0a8ebe2285c5efffb31fe283fc9f949587752bc9c807befa6b313703.jpg) +Figure 30. fetch_slide_large_2D +(b) U-sparse + +![](images/05cb8d20c9f87b57cda21050cd192c7dacebf1c27184f9b957070d139f3cd03a.jpg) +(a) L-sparse + +![](images/d5e177f6f51c92a2158aaad299bcf38a38f4f03a662589bb9dfcff869f84b5d8.jpg) +(b) L-dense + +![](images/2b83f731236237cf6aa0bec6d4fd16f01baaa0e1dd7692605990f41ac00b5e41.jpg) +(a) S-dense +Figure 33. fetch.push + +![](images/c1263f3135e63b7dd835a60035fe4f4bdbc37d8768233d273d27d02f5ae9497b.jpg) +Figure 32. fetch.push +(b) S-sparse + +![](images/7434212e83353e7e2dc981d9a47e92db1ba7a6372b2a54c2c3d2ceb3c92d2c3c.jpg) +Figure 34. pointmaze_medium + +![](images/fee0df9a3e81dbb7d3c848a78e843c4ff5bff2a9a59df02793bb07ad1a9a318d.jpg) +Figure 35. halfcheetah part 1 + +![](images/3b4817c3857630c7f2a7ec83ddbaad0035613ec7ead2a36822a643e60d4fb092.jpg) +Figure 36. 
halfcheetah part 2 \ No newline at end of file diff --git a/ICML/2025/Zero-Shot Offline Imitation Learning via Optimal Transport/images.zip b/ICML/2025/Zero-Shot Offline Imitation Learning via Optimal Transport/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..ed908b3edcc2815ea758ae5792817f80d8ca2d5b --- /dev/null +++ b/ICML/2025/Zero-Shot Offline Imitation Learning via Optimal Transport/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:97e5521f171279ce0e317e949fb233bf151accdd9092d69698980e95f9a8bd13 +size 3950533 diff --git a/ICML/2025/Zero-Shot Offline Imitation Learning via Optimal Transport/layout.json b/ICML/2025/Zero-Shot Offline Imitation Learning via Optimal Transport/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..63031c1a152ea40da66019a678970fe6e5a67891 --- /dev/null +++ b/ICML/2025/Zero-Shot Offline Imitation Learning via Optimal Transport/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a9f297b595680fa265a2179f78fe2da941d3dbc8fae73775c112a420592f911d +size 1153045 diff --git a/ICML/2025/Zero-shot Meta-learning for Tabular Prediction Tasks with Adversarially Pre-trained Transformer/cad58b3c-c8ca-4697-bc6d-85f4b57f8922_content_list.json b/ICML/2025/Zero-shot Meta-learning for Tabular Prediction Tasks with Adversarially Pre-trained Transformer/cad58b3c-c8ca-4697-bc6d-85f4b57f8922_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..c95294c6adb92d9c08942a50fc7b9d818cfd2615 --- /dev/null +++ b/ICML/2025/Zero-shot Meta-learning for Tabular Prediction Tasks with Adversarially Pre-trained Transformer/cad58b3c-c8ca-4697-bc6d-85f4b57f8922_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1f60b7043d988115ea5c276c951ccede201782464a6cfec342c80b4c7b817ab5 +size 115970 diff --git a/ICML/2025/Zero-shot Meta-learning for Tabular Prediction Tasks with Adversarially Pre-trained Transformer/cad58b3c-c8ca-4697-bc6d-85f4b57f8922_model.json b/ICML/2025/Zero-shot Meta-learning for Tabular Prediction Tasks with Adversarially Pre-trained Transformer/cad58b3c-c8ca-4697-bc6d-85f4b57f8922_model.json new file mode 100644 index 0000000000000000000000000000000000000000..95bdf1c20ccee6d7631f669b5e7f22e52982e3f1 --- /dev/null +++ b/ICML/2025/Zero-shot Meta-learning for Tabular Prediction Tasks with Adversarially Pre-trained Transformer/cad58b3c-c8ca-4697-bc6d-85f4b57f8922_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:019f4799f34b934921983c56505907af5a9ebc148d85aa5472e47ada96f5f2df +size 144863 diff --git a/ICML/2025/Zero-shot Meta-learning for Tabular Prediction Tasks with Adversarially Pre-trained Transformer/cad58b3c-c8ca-4697-bc6d-85f4b57f8922_origin.pdf b/ICML/2025/Zero-shot Meta-learning for Tabular Prediction Tasks with Adversarially Pre-trained Transformer/cad58b3c-c8ca-4697-bc6d-85f4b57f8922_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..59964e7f570d583b702ba46b1d6ca130123d5201 --- /dev/null +++ b/ICML/2025/Zero-shot Meta-learning for Tabular Prediction Tasks with Adversarially Pre-trained Transformer/cad58b3c-c8ca-4697-bc6d-85f4b57f8922_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9fbf47b9fc3dd8c9779b5da31153bb1d93bc7b2acecaae14ad9421935fb8b47c +size 1157626 diff --git a/ICML/2025/Zero-shot Meta-learning for Tabular Prediction Tasks with Adversarially Pre-trained Transformer/full.md 
b/ICML/2025/Zero-shot Meta-learning for Tabular Prediction Tasks with Adversarially Pre-trained Transformer/full.md new file mode 100644 index 0000000000000000000000000000000000000000..41d8539480cb4575ed0c927b28f5831846224a89 --- /dev/null +++ b/ICML/2025/Zero-shot Meta-learning for Tabular Prediction Tasks with Adversarially Pre-trained Transformer/full.md @@ -0,0 +1,382 @@ +# Zero-shot Meta-learning for Tabular Prediction Tasks with Adversarily Pre-trained Transformer + +Yulun Wu $^{1}$ Doron L. Bergman $^{2}$ + +# Abstract + +We present an Adversarially Pre-trained Transformer (APT) that is able to perform zero-shot meta-learning on tabular prediction tasks without pre-training on any real-world dataset, extending on the recent development of Prior-Data Fitted Networks (PFNs) and TabPFN. Specifically, APT is pre-trained with adversarial synthetic data agents, who continue to shift their underlying data generating distribution and deliberately challenge the model with different synthetic datasets. In addition, we propose a mixture block architecture that is able to handle classification tasks with arbitrary number of classes, addressing the class size limitation – a crucial weakness of prior deep tabular zero-shot learners. In experiments, we show that our framework matches state-of-the-art performance on small classification tasks without filtering on dataset characteristics such as number of classes and number of missing values, while maintaining an average runtime under one second. On common benchmark dataset suites in both classification and regression, we show that adversarial pre-training was able to enhance TabPFN's performance. In our analysis, we demonstrate that the adversarial synthetic data agents were able to generate a more diverse collection of data compared to the ordinary random generator in TabPFN. In addition, we demonstrate that our mixture block neural design has improved generalizability and greatly accelerated pre-training. + +# 1. Introduction + +In standard deep learning workflows, models are either trained per dataset, or employed on data in a form com + +1University of California, Berkeley 2Capital One. Correspondence to: Yulun Wu . + +Proceedings of the $42^{nd}$ International Conference on Machine Learning, Vancouver, Canada. PMLR 267, 2025. Copyright 2025 by the author(s). + +patible with, and drawn from, the same distribution as the datasets it was previously trained on. Even in transfer learning, where the target of the model is changed, the input is at most expanded, but at least overlaps heavily with the data distribution that the model has previously seen in training. This is in contrast with meta learning (Finn et al., 2017; Nichol & Schulman, 2018; Lemke et al., 2015; Vanschooren, 2018; Feurer et al., 2022; Hospedales et al., 2021; Zintgraf et al., 2021), where a model is trained to be adaptive to new datasets such that few gradient updates or fine-tuning are needed, instead of training a new model specialized to every distinct dataset from scratch. In meta learning, rather than modeling a specific dataset, the model is trained to learn how to learn. This has multiple advantages. First, meta learning is highly adaptable (Huisman et al., 2021; Finn et al., 2017; Frans & Witkowski, 2021) – it learns more generalized representations that can be adapted to new tasks and different domains. Second, meta learning makes efficient use of data (Finn et al., 2017; Gevaert, 2021) – it supports learning from just a few samples. 
Third, as a consequence of its efficient use of (small) data, the model can reach a point where it is able to make meaningful predictions very quickly (Vanschooren, 2018). + +In prior work, Verma et al. (2020) discussed the notion of zero-shot meta-learning. They train a generative adversarial network conditioned on class attributes, that can generate novel (previously unseen) class samples. This relies on the inputs present in the training data (class attributes) to be indicative of the new unseen classes. While they do not use gradient updates on the unseen data for prediction, they rely on the input data coming at the very least from a very similar distribution to that of the training data. The scope of problems this work aims to address is pristine zero-shot meta learning: given an unseen dataset from an unseen task after the model is pre-trained and deployed, can we do prediction on this dataset without training the model on it? Specifically, with zero gradient update on the model, and with no reliance on the context similarity between this dataset and the datasets that the model was pre-trained on. Note that this concept of zero-shot is slightly different from that in large vision and language models (Mann et al., 2020; Perez et al., 2021; Tsimpoukelli et al., 2021; Cahyawijaya et al., 2024; Ahmed & Devanbu, 2022) – the unseen datasets can + +entail heterogeneous fields or class labels that were never observed during pre-training, and zero-shot in this context refers to the amount of model optimization conducted being zero given the unseen dataset rather than the amount of empirical examples seen being zero. The advantage of successfully establishing such a model is the exceptional generalizability and runtime. + +A few recent breakthroughs (Müller et al., 2021; Hollmann et al., 2022) have demonstrated that achieving this aspiration is possible: Müller et al. (2021) introduced Prior-Data Fitted Networks (PFNs). They pursue zero-shot meta-learning by using transformers pre-trained on synthetic data generated from a collection of prior distributions, to perform approximate Bayesian inference using in-context learning (Luo et al., 2018; Mann et al., 2020). PFNs do not fit a model on downstream training data, instead feeding training data into the context in forward pass and making predictions conditioned on the context. Hollmann et al. (2022) introduced a PFN specifically aimed at tabular datasets – TabPFN. A more detailed background review on PFNs and specifically TabPFN can be found in Appendix A. Tabular data – data organized in rows and columns, and characterized by an unlimited heterogeneity of data fields, remains an area of machine learning where deep neural networks (DNNs) still struggle (Borisov et al., 2022; Shwartz-Ziv & Armon, 2022; McElfresh et al., 2024; Ye et al., 2024b) to push the boundaries of the state-of-the-art gradient boosted decision trees (GBDTs) (Prokhorenkova et al., 2018; Chen & Guestrin, 2016; Ke et al., 2017), despite numerous approaches (Borisov et al., 2022; Somepalli et al., 2021; Grinsztajn et al., 2022; Gorishniy et al., 2021; Rubachev et al., 2022; Levin et al., 2022; Kadra et al., 2021a; Arik & Pfister, 2021; Popov et al., 2019). Yet, tabular data is one of the most common data types in real-world machine learning (ML) applications (Chui et al., 2018; Borisov et al., 2022; Shwartz-Ziv & Armon, 2022). 
Although TabPFN has demonstrated exceptional zero-shot meta-learning capability on certain small tabular prediction tasks, we show that the distribution of synthetic data used in its pre-training is actually quite limited. Besides, the class size constraints of TabPFN pose a significant limitation on its generalizability – this might not be an important concern for the traditional one-model-for-one-domain pipeline, but is a crucial weakness for a zero-shot meta-learner (ZSML) since an unprecedented number of class labels could be present in inference time. Note that zero-shot meta-learning is largely similar to foundation modeling but slightly different in its scale and objective – it does not necessarily involve billions of parameters to learn the distribution of data and acquire token representations in a broad domain such as language or health records, but to model the general prediction logic and learn how to acquire data representations in unseen domains during inference time. + +Similar to Hollmann et al. (2022), we investigate the capa + +bility of zero-shot meta-learning under the scope of tabular data prediction problems. Our contributions are listed as follows: + +- We propose an adversarial synthetic data pre-training approach on PFNs to establish a zero-shot meta-learner that is able to handle tabular prediction tasks with improved performance. +- We eliminated the class size limitation for TabPFN on classification tasks by proposing the mixture block neural design, which yields a zero-shot meta-learner with better generalizability. +- In experiments, we show that our framework achieves state-of-the-art performance on small tabular classification tasks without filtering on class size, feature size, number of categorical features or number of missing values, and improved upon TabPFN in both classification and regression. We show that the adversarial data agents are able to enrich the synthetic data generating distribution, and the mixture block is able to generalize to unseen class size and accelerate pre-training. + +# 2. Proposed Method + +Our Adversarily Pre-trained Transformer (APT) model is pre-trained once offline using a mix of random synthetic data generators and adversarial synthetic data agents. In this phase, the goal of the model is not to learn the specific pattern or probability distribution of any given dataset, but to learn the general prediction logic and means to represent various data, i.e. learning to learn. Once pre-trained and deployed, the model makes predictions on the testing set of any real-world dataset of interest in one forward pass, without performing any back-propagation or gradient updates of its weights. A demonstration of the workflow is shown in Figure 1. In Section 2.1, we describe the adversarial data agents in detail, whose goal is to continuously produce diverse and more challenging datasets for the meta-learning model during pre-training; in Section 2.2, we elaborate on the architecture of our transformer model, which has no restrictions on the class size of any real-world datasets practitioners provide. + +# 2.1. 
Adversarial Data Agents + +In the pre-training phase, we compose a batch of $m$ datasets $\{X^{(k)},\pmb{y}^{(k)}\}_{1\leq k\leq m}$ in each iteration using $m$ different data generators $\{g_1,\dots ,g_m\}$ that each independently generate $n$ number of data points, where $X^{(k)} = [x_i^{(k)}]_{i\leq n}^{\top} = [x_{i,j}^{(k)}]_{i\leq n,j\leq d_k}$ and $\pmb{y}^{(k)} = [y_i^{(k)}]_{i\leq n}^{\top}$ are the predictor matrix and response vector (denoted as $X$ and $\pmb{y}$ when no index is specified) with feature size $d_{k}$ . We adopted the multi-layer perceptron (MLP) construction introduced in + +![](images/780b3b253925c37bf10c14a1eccdfe91a009c8ee31cfab1aedceeaffb2932f14.jpg) +Figure 1. The model workflow of Adversarially Pre-trained Transformer (APT). Pre-training is done once, offline, with datasets generated by a mix of random synthetic data generators and adversarial synthetic data agents. The train-test split is randomly sampled for each batch of datasets. After the model is pre-trained and deployed, predictions are done per real-world dataset, online, with one forward pass and no parameter update. The transformer is test-masked, meaning that each token only attends to training data tokens. For cleanliness of the figure, only the attentions to and from the first training data token and the first testing data token are plotted. + +Hollmann et al. (2022) for each generator instance, where predictors $\boldsymbol{x}_i^{(k)}$ and response $y_i^{(k)}$ are values of randomly selected neurons in sparsified noisy MLPs with some additional pre-processing. More details regarding this approach can be found in Appendix A.1. + +Different from Hollmann et al. (2022), instead of generating datasets solely from randomly initialized sparse MLPs, a subset of the $m$ generators in our framework are adversarial agents that learn from the model's performance on the generated data, and perform gradient ascent on the model's prediction loss. In other words, these adversarial agents challenge the model by constantly shifting the synthetic data generating distributions to deliberately produce datasets that are more difficult for the model to handle. The loss for an adversarial agent $g_{\eta}$ with respect to prediction model $q_{\theta}$ can be written as + +$$ +\mathcal {L} \left(g _ {\eta}\right) = \mathbb {E} _ {X, \boldsymbol {y} \sim g _ {\eta}} \log q _ {\theta} \left(\boldsymbol {y} _ {(l + 1): n} \mid X _ {(l + 1): n}, \left\{X _ {1: l}, \boldsymbol {y} _ {1: l} \right\}\right) \tag {1} +$$ + +where $\{X_{1:l},\pmb{y}_{1:l}\}$ and $\{X_{(l + 1):n},\pmb{y}_{(l + 1):n}\}$ are the training and testing set split from generated dataset $\{X,\pmb{y}\}$ at position $l$ . In the following sections, we refer to the former (generators based on randomly initialized MLPs) as ordinary data generator, and the latter (generators based on adversarially updated MLPs) as adversarial data agents. + +Relation to Classic Adversarial Training In relation to GANs (Goodfellow et al., 2014), the data agents here are the generators, and the meta-learner is the discriminator. Contrary to classic adversarial training, there is no real versus fake samples for the discriminator to distinguish in this context. The generator (data agent) and the discriminator (meta-learner) have one coherent competing objective: the meta-learner seeks to minimize the prediction loss on data generated by the data agents, while the data agent seeks to generate data that maximize the prediction loss by the meta-learner. 
As a result, the desired gradients for updating the discriminator is but a flip of sign to its gradients calculated through back propagation on the generator's objective. Hence, both the meta-learner and the data agents can be updated in one single iteration after loss calculation in this scenario. This results in a more efficient adversarial training, and we further reduce its potential of mode collapse with data agent reset described in the last paragraph of this section. Note that contrary to classic GANs, the discriminator is the final product in this context rather than the generator. + +Discretization of Variables A key challenge in establishing adversarial data agents is the gradient flow under discretization: how do we generate synthetic data with categorical features while being able to perform end-to-end loss back-propagation? Inspired by the Gumbel-Softmax + +![](images/04efe47b0d22a60faaa4a6f69b897ee306a52725b062f4d938ec50969fee1e69.jpg) +(a) Model architecture for classification tasks + +![](images/bac649588f0a734881cfca7f36df5930484863dc6ed7b1d493e6303c9e293a95.jpg) +(b) Mixture block +Figure 2. Model architecture and the mixture block. a) $X = (X^{\mathrm{train}}, X^{\mathrm{test}})$ and $y^{\mathrm{train}}$ are embedded on $\mathbb{R}^{d_{\mathrm{model}}}$ using a feature embedding block and linear projection respectively. Then, embeddings for $X^{\mathrm{train}}$ and $y^{\mathrm{train}}$ are added as $h^{\mathrm{train}}$ , embeddings for $X^{\mathrm{test}}$ are denoted as $h^{\mathrm{test}}$ . Embeddings $(h^{\mathrm{train}}, h^{\mathrm{test}})$ are then passed to the transformer blocks with attention towards test embedding $h^{\mathrm{test}}$ masked, same as Hollmann et al. (2022). Finally, the outputs from transformer blocks are transformed to class probabilities through the mixture block for classification tasks, or directly transformed to point predictions through standard dense final layer for regression tasks. b) For each data point in the testing set, we use its output $q$ after transformer blocks to query training data's outputs $K$ . With two different dense feedforwards, two sets of logits are predicted: one set of logits are used to calculate the scaled softmax probabilities – these probabilities indicate how likely that the testing point is in the same class as the corresponding training points; the other set of logits are used to sample soft-discrete binary gates via Concrete distribution to sparsify these probabilities. Finally, the gated probabilities from the same class are added together to yield the final predictions. + +trick (Jang et al., 2016) and the Concrete distribution (Maddison et al., 2016), we propose a continuous relaxation of discretization that naturally extends on the ranking discretization approach introduced in Hollmann et al. (2022), controlled by a user-specified temperature hyperparameter $\tau$ . 
For the $j$-th feature column $\boldsymbol{x}_{\cdot,j}$ of a predictor matrix $X$ and the corresponding $N_{j} - 1$ randomly sampled Gaussian quantiles $Q_{j}^{(1)} < Q_{j}^{(2)} < \dots < Q_{j}^{(N_{j} - 1)}$ drawn at the initialization of the corresponding data agent, the soft-discretization that converts the $i$-th value of the $j$-th feature $x_{i,j}$ to a soft-categorical value with cardinality $N_{j}$ is given by

$$
x_{i,j}^{\mathrm{cat}} = \pi\left(\left|\left\{x_{i,j} \geq \tilde{Q}_{j}^{(l)}\right\}_{l}\right|\right) + \tau \cdot \log\left(1 + \frac{x_{i,j} - \tilde{Q}_{j}^{\left(\left|\left\{x_{i,j} \geq \tilde{Q}_{j}^{(l)}\right\}_{l}\right|\right)}}{\tilde{Q}_{j}^{\left(1 + \left|\left\{x_{i,j} \geq \tilde{Q}_{j}^{(l)}\right\}_{l}\right|\right)} - \tilde{Q}_{j}^{\left(\left|\left\{x_{i,j} \geq \tilde{Q}_{j}^{(l)}\right\}_{l}\right|\right)}}\right) \tag{2}
$$

where $\pi$ is a permutation function on the integer domain $\{1,2,\ldots ,N_{j} - 1\}$, $\tilde{Q}_j^{(l)} = \mu (\pmb{x}_{\cdot ,j}) + \sigma (\pmb{x}_{\cdot ,j})\cdot Q_j^{(l)}$ for $1\leq l < N_{j}$ are the unnormalized quantiles with boundaries $\tilde{Q}_j^{(0)} = \min (\pmb{x}_{\cdot ,j})$ and $\tilde{Q}_j^{(N_j)} = \max (\pmb{x}_{\cdot ,j})$, and $|\{v\geq \tilde{Q}_j^{(l)}\}_l| = \sum_l \mathbb{I}(v\geq \tilde{Q}_j^{(l)})$ is the position of a value $v$ in the ordered sequence $\{\tilde{Q}_j^{(l)}\}_{1\leq l\leq N_j}$. A visual demonstration of this conversion can be found on the right side of Figure 6 in the Appendix. Same as Hollmann et al. (2022), the extended ranking discretization approach decides the value of a categorical variable using only the continuous scalar $x_{i,j}$, i.e. the value of one neuron in the sparsified noisy MLP, as opposed to the Gumbel-Softmax or Concrete distribution approach, which would require selecting $N_j$ neurons as logits of the $N_j$ classes. In our early experiments, we found that sampling multiple neurons to decide the value of one categorical feature achieved significantly worse performance than ranking discretization. Furthermore, since we do not desire to learn the explicit form of these distributions, explicitly generating class logits is not a necessity, and hence we prefer a more efficient differentiable discretization technique that does not involve reparameterization tricks, softmax operations, or excessive sampling.

Data Agent Reset In terms of the diversity of generated data, there is a balance between adversarially updating the neurons in the MLPs and re-initializing the MLPs altogether. Although in the short run re-initializing the MLPs and the corresponding random factors (number of features, number of classes, etc.) instantaneously yields new datasets with a high chance of possessing fields and distributions much different from the previous ones, such generation is constrained in the long run by the domain of distributions defined by the preset range of hyperparameters (we show some evidence on this in Section 3.2). On the other hand, although adversarial data agents are performance-driven and can explore out-of-distribution regions better than random initialization, they also have the potential to converge to the Nash equilibrium and reach a stalemate with the meta-learner – for example, converging to a state where generated predictors $x$ and response $y$ have no correlation.
Hence, we combine the two approaches and reset the adversarial data agents every $N_{e}$ epochs to avoid such convergence. To speak from the GANs angle, we are letting the discriminator, i.e. the meta-learner, to periodically gain an advantage and slightly beat the generator. Different from classic GANs, the discriminator is the desired model here while the generator is the supporting entity, hence exploration is more important than optimization for the generator in this context. + +# 2.2. Mixture Block Architecture + +Contrary to modern DNNs, traditional ML algorithms such as K-nearest neighbors and tree-based methods are more flexible in terms of their ability to handle varying cardinality of classification labels, in the sense that they do not entail fixed-size dense layer parameters that cannot generalize to a different classification task with different label cardinality. This is not much of an issue for the traditional one-model-for-one-dataset ML pipeline, but is of significant importance for zero-shot meta-learners, yet unaddressed in prior works. Inspired by how tree-based methods solve classification tasks in a manner that is compliant to the empirical values and cardinality of training labels, we propose a scatter-sum mixture block as the output prediction head for classification tasks that significantly departs from the ordinary dense final layer approach. A visual demonstration can be found on the right of Figure 2. For each data point in the testing set, we use its embedding after the transformer blocks to query the embeddings of training data, and yield two sets of logits via two separate feedforwards: one set of logits is used to calculate softmax probability weights of keys and the other set is used to sample soft-discrete gates via Concrete distribution (Maddison et al., 2016) to sparsify these weights. In essence, these gates govern the splits of training data in relation to the testing query, such that the final prediction only pays attention to a subset of relevant training data representations. In our preliminary experiments, we discovered that sparsifying attention through these gates are crucial to performance, and the mixture block works poorly without this component. The output class probabilities are then acquired by a scatter summation of non-gated values using their original labels as index. Relating to tree-based methods, the gates here are used to determine the subset of training data that are in the same split of leaf nodes as a given testing data point, and the weights are used to determine the relative importance of each label in that split. + +Contrary to tree-based methods, the splits are point-specific, i.e. there is a different split decided for each testing data point, and the decision within the split is weighted rather than via majority voting. Note that this approach does not change the order of computation complexity in terms of data size and data dimensions – it simply removes the final dense layer and adds two more multi-head attentions and feedforwards to the transformer architecture in a non-sequential manner. + +Large Data Size and Feature Size Compared to the class size limitation, the feature size limitation of PFNs is relatively less tricky in theory, and there are already a few straightforward solutions concurrent with this work (Hollmann et al., 2025; Qu et al., 2025) that extend TabPFN's capabilities in handling datasets with larger number of features, as well as larger number of samples. 
Besides, the data capacity of PFNs could be adequately expanded by incorporating some of the recent advancements in general transformer and state-space model research (Wu et al., 2022; Bulatov et al., 2023). Therefore, we do not put emphasis on addressing these problems in this work, and make two simple adaptations to APT based on patch embedding and batch aggregation in the event that prediction on large datasets is required. See Appendix B for details. Note that concurrent solutions such as Hollmann et al. (2025); Qu et al. (2025) do not pose conflict with our proposed architecture (mixture block only modifies the last layer of the model), thus can be naturally incorporated into our framework as the practitioners desire. + +# 3. Experiment + +We evaluated our model and competing algorithms on common ML benchmarking dataset suites for tabular classification and tabular regression problems. In Section 3.1, we show that APT achieves state-of-the-art performance on small tabular classification tasks with a runtime comparable to that of TabPFN. In Section 3.2, we present qualitative analysis on the impact and characteristics of the adversarial data agents. In Section 3.3, we demonstrate the generalizability of the mixture block and its effect on pre-training. In Section 3.4, we provide ablation study and show that adversarial pre-training was able to enhance the performance of TabPFN on both classification and regression tasks. + +Datasets For classification, we used the curated opensource OpenML-CC18 dataset suite (Bischl et al., 2021) containing 68 popular tabular benchmark datasets (4 vision datasets mnist_784, CIFAR_10, Devnagari-Script, and Fashion-MNIST are not treated as tabular and removed from the total 72 datasets), and our main results are presented on all small datasets (number of samples no larger than 2,000) in OpenML-CC18 similar to Hollmann et al. (2022). + +Table 1. Performance of algorithms on 35 small datasets with no larger than 2,000 data points in the OpenML-CC18 suite, given one hour of time budget. Note that there are two styles of standard deviation (std.) calculation for AUC: 1) first take the mean of AUC across datasets, then calculate the std. across splits (std. of mean), as used by TabPFN (Hollmann et al., 2022); 2) first calculate the std. across splits on each dataset, then take the mean across datasets (mean of std.), as used by TabZilla (McElfresh et al., 2024). Our result table largely adopted the style of TabZilla, but we present both std.'s here for clarity. The std. of mean shows variation on suite level, which is more likely to result in a statistical significance compared to mean of std., which shows average variation on dataset level. The mean of AUC taken across splits are used as the scoring metric to calculate "Rank" and "Wins" of each algorithm across datasets. If many algorithms are tied for first, a win is assigned to each first-place algorithm. Same as TabZilla (McElfresh et al., 2024), the table is ordered by the mean of rank. The full results on each dataset for top algorithms are shown in Table 5 of Appendix D. + +
| Algorithm | Rank mean ↓ | Rank med. | Rank min | Rank max | ROC-AUC mean ↑ | ROC-AUC std. of mean | ROC-AUC mean of std. | Wins ↑ | Time (sec.) mean ↓ | Time (sec.) med. |
|---|---|---|---|---|---|---|---|---|---|---|
| APT | 3.86 | 3 | 1 | 11 | 0.921 | 0.003 | 0.019 | 13 | 0.90 | 0.40 |
| CatBoost | 4.03 | 4 | 1 | 9 | 0.918 | 0.002 | 0.020 | 6 | 3542.42 | 3555.74 |
| TabPFN | 4.57 | 4 | 1 | 11 | 0.913 | 0.003 | 0.020 | 4 | 0.86 | 0.37 |
| SVM | 4.89 | 4 | 1 | 12 | 0.904 | 0.003 | 0.023 | 10 | 1175.58 | 481.50 |
| XGBoost | 5.37 | 5 | 1 | 10 | 0.914 | 0.006 | 0.020 | 4 | 3607.78 | 3598.91 |
| LightGBM | 5.60 | 6 | 1 | 11 | 0.917 | 0.003 | 0.019 | 3 | 3542.94 | 3582.07 |
| LASSO-Logistic | 6.69 | 8 | 1 | 12 | 0.908 | 0.001 | 0.023 | 3 | 1519.41 | 1227.52 |
| Ridge-Logistic | 6.91 | 8 | 1 | 11 | 0.907 | 0.001 | 0.022 | 1 | 1479.93 | 845.59 |
| RandomForest | 7.17 | 7 | 1 | 12 | 0.908 | 0.003 | 0.021 | 3 | 1736.71 | 1476.37 |
| ResNet | 7.69 | 9 | 1 | 12 | 0.825 | 0.004 | 0.040 | 3 | 3582.15 | 3597.41 |
| KNN | 9.57 | 11 | 1 | 12 | 0.884 | 0.006 | 0.024 | 1 | 127.82 | 77.31 |
| SAINT | 9.97 | 12 | 1 | 12 | 0.759 | 0.017 | 0.077 | 1 | 3597.41 | 3594.41 |
+ +except that 1) there is no additional filtering, i.e. all datasets regardless of number of classes, number of features, number of categorical features, and number of missing values are kept in our evaluation pool, composing a more general collection of datasets. This brings the number of datasets in the evaluation pool from 18 to 35; 2) The train-test split is set to 80-20 instead of the unconventional 50-50. For regression benchmarking, we used the curated open-source OpenML-CTR23 dataset suite (Fischer et al., 2023). + +Algorithms We compared APT to the top 3 GBDT algorithms (CatBoost (Prokhorenkova et al., 2018), XGBoost (Chen & Guestrin, 2016), LightGBM (Ke et al., 2017)) and the top 3 DNN methods (TabPFN (Hollmann et al., 2022), Tabular ResNet (Gorishniy et al., 2021), SAINT (Somepalli et al., 2021)) in the main experiments of TabZilla (McElfresh et al., 2024), as well as 5 standard machine learning algorithms (KNN (Cover & Hart, 1967), Ridge (Tikhonov, 1963), LASSO (Tibshirani, 1996), SVM (Cortes, 1995), Random Forest (Ho, 1995)). + +Hyperparameters The hyperparameter search space of benchmark models is directly inherited from Hollmann et al. (2022), and directly inherited from McElfresh et al. (2024) if the benchmark model is not in Hollmann et al. (2022). TabPFN is pre-trained with hyperparameters directly inherited from their released checkpoint, only changing the maximum number of classes from 10 to 26, which is the + +maximal class size of datasets in the OpenML-CC18 suite. For APT, all common hyperparameters shared with TabPFN are directly inherited from TabPFN. See Appendix C for more details. A total of $12.5\%$ of the data generators are adversarial data agents during the pre-training of APT, with learning rate $10^{-1}$ , weight decay $10^{-5}$ , soft-discretization temperature $10^{-2}$ , and 2,000 gradient steps between resets. + +# 3.1. APT Achieves State-of-the-art Performance on Small Tabular Classification Tasks + +We evaluated APT and benchmark models on small datasets in OpenML-CC18 using area under the receiver operating characteristic curve (ROC-AUC) with the one-vs-one (OVO) multi-class evaluation configuration, similar to Hollmann et al. (2022). Previously, Hollmann et al. (2022) has shown that TabPFN matches the performance of state-of-the-art GBDT algorithms and outperforms them on small datasets that have less than 100 features, less than 10 classes, no categorical features, and no missing values in their main results. In this work, we do not impose any of these restrictions to further examine APT's and TabPFN's zero-shot meta-learning capability. The results are presented in Table 1. For datasets with number of features larger than 100, we subsample 100 features similar to (McElfresh et al., 2024). In these experiments, APT achieved state-of-the-art performances with a runtime similar to that of TabPFN. The average runtime of APT increased by $4.6\%$ compared to + +![](images/71c7412377b7645590d17f204e7fe8365edd0881a0fbf0d423677c86737e4da0.jpg) +(a) Distribution of data in datasets generated by a set of ordinary data generators + +![](images/cd2642fb082b04698abb416aa3ad1a172ad9f1b74f00e443c485805b1548c0e3.jpg) +Figure 3. Contour plot of two-dimensional data generated by ordinary data generators and adversarial data agents. Each subplot contains a total of 100,000 data points from 2,000 datasets. 
Note that subplot (a) and subplot (b) correspond to two independent sets of ordinary generators with no generators shared between them, as each dataset is generated by an independently initialized random sparse neural network. Each dataset in subplot (c) is generated by an adversarial data agent after consecutive loss back-propagation updates.

![](images/f7c10b616198e668d3cafa7e9e537c7f0266890f38568a09717e1229f7047363.jpg)
(b) Distribution of data generated by another independent set of ordinary generators
(c) Distribution of data generated by a set of adversarial data agents

TabPFN and remained within a second on GPU (NVIDIA H100), showing that the neural modifications from the mixture block have not made APT significantly heavier. Note that no cherry-picking was performed on model checkpoints for APT: the APT model that we released and used for evaluations is the last model after the final iteration of pre-training. Realistically, PFN-based models are pre-trained on synthetic data, and picking checkpoints for evaluation ad hoc is not ideal unless a whole different collection of real-world datasets is used for validation; even in that case, it would still raise concerns of data leakage.

In these experiments, the deep learning algorithms under the standard supervised learning pipeline, ResNet and SAINT, yielded subpar performance. Note that the computing budget in Hollmann et al. (2022) and ours is set to 1 hour per dataset per split, in contrast to the 10 hours in McElfresh et al. (2024). The deep learning algorithms under the zero-shot meta-learning pipeline, APT and TabPFN, yielded ideal performance, but it has been previously shown that TabPFN sees a significant drop in performance on datasets with categorical features or missing values (Hollmann et al., 2022). In Figure 4, we further break down the results on datasets with and without these characteristics.

![](images/55418ad0e5d800772223d7dc8274b07b563d269d62ef9cafbf95c33afa0c31e2.jpg)
Figure 4. A breakdown of performance by dataset characteristics. Mean ranks are plotted in orange for datasets with the respective characteristic, and in blue for datasets without it.

From Figure 4, it can be observed that APT has largely addressed TabPFN's weakness in handling datasets with missing values, and has closed the gap between performance on datasets with and without categorical features compared to TabPFN, although GBDTs such as CatBoost still show the greatest capability in handling datasets with categorical features. We further break down the performance contributions from each proposed component in Section 3.4.

# 3.2. Qualitative Analysis of the Adversarial Data Agents

Even though arbitrary MLPs have the potential to serve as universal function approximators given certain regularity constraints (Hornik et al., 1989), the pre-set hyperparameters (e.g., the sampling distribution of neurons, the sampling distribution of the number of layers, the choices of activations, etc.) as well as the lack of gradient updates restrict the family of data distributions that randomly initialized sparse neural networks can put forward in practice. As shown in Figure 3, the distributions of two-dimensional data generated by two entirely different sets of random sparse neural networks align fairly precisely with merely 2,000 independent initializations.
In contrast, even without resetting the neural architecture and parameters, the adversarial data agents still managed to generate a more diverse collection of data and diffuse the concentrated peaks present in the density distribution of data generated by ordinary data generators. To be exact, for a collection of 2,000 datasets generated by ordinary data generators, we evaluated a KL-divergence of $0.134 \pm 0.141$ between it and a collection of 2,000 datasets generated by another set of ordinary data generators, and a KL-divergence of $0.813 \pm 0.072$ between it and a collection of 2,000 datasets generated by adversarial data agents.

As a motivation for imposing data agent resets, we were wary that the data agents, after many adversarial updates, could yield synthetic datasets whose features have little to no signal on the response variable. With our hyperparameter settings, we have not observed such behavior; to our surprise, the synthetic datasets generated by adversarial agents exhibit a slightly stronger signal, with a Pearson correlation of $0.311 \pm 0.026$ between predictors and responses on datasets with two-dimensional features as opposed to the $0.268 \pm 0.013$ of ordinary data generators. We postulate that this is partially a consequence of the high reset frequency and high generator learning rate.

# 3.3. Generalizability of the Mixture Block

After a ZSML is deployed, one should not be required to re-do its pre-training simply because the datasets in the evaluation pool have characteristics that the model cannot handle, and this is why the mixture block architecture is important. For TabPFN, we have to inspect the evaluation dataset pool first and calculate the largest class size before using it as a hyperparameter for pre-training. This is not a procedure that fits well into the zero-shot learning concept. Our proposed mixture block architecture does not have such a class size limitation, and we show the performance of APT on datasets with more than 10 classes in OpenML-CC18 in Table 2.

Table 2. The ROC-AUC on datasets with more than 10 classes. APT pre-trained on datasets with a maximum of 10 classes is able to match APT without the mixture block pre-trained on datasets with a maximum of 26 classes on 3 of the 4 datasets.
| | letter | isolet | vowel | texture |
| --- | --- | --- | --- | --- |
| APT w/o Mixture | .975 ± .002 | .970 ± .003 | 1 ± 0 | 1 ± 0 |
| APT | .975 ± .002 | .939 ± .011 | 1 ± 0 | 1 ± 0 |
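All classification results in these tables are OVO multi-class ROC-AUC. For reference, the sketch below shows one standard way to compute this metric with scikit-learn; whether the authors' evaluation code uses exactly this call is an assumption on our part.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# y_true: integer class labels; y_prob: predicted class probabilities [n_samples, n_classes]
y_true = np.array([0, 2, 1, 2, 0, 1])
y_prob = np.array([
    [0.7, 0.2, 0.1],
    [0.1, 0.3, 0.6],
    [0.2, 0.5, 0.3],
    [0.2, 0.2, 0.6],
    [0.6, 0.3, 0.1],
    [0.3, 0.4, 0.3],
])

# One-vs-one multi-class ROC-AUC, macro-averaged over class pairs.
auc = roc_auc_score(y_true, y_prob, multi_class="ovo", average="macro")
print(round(auc, 3))
```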
Interestingly, the mixture block's generalizability significantly accelerated pre-training in our experiments. The ROC-AUC evaluated after each iteration of pre-training with and without the mixture block is presented in Figure 5. Note that ensembling over permutations (Hollmann et al., 2022) is not performed in this experiment, as it would dramatically increase runtime given that evaluation is performed after every gradient step.

![](images/5d1530aad58772b7627e576989198928ad1be282b3357b4a8050bbf747b2e5bb.jpg)
Figure 5. ROC-AUC on the 35 small datasets in OpenML-CC18 evaluated after each of the first 30,000 gradient steps.

From Figure 5, we can see that models with the mixture block learn remarkably faster than models without it. For APT, the model reaches an AUC of 0.70 in merely 40 gradient steps, an AUC of 0.80 in 591 gradient steps, and an AUC of 0.90 in 11,780 gradient steps.

# 3.4. Ablation Study

Classification Although we discovered in the previous section that the mixture block gives the model a nice performance acceleration, the original purpose of designing such an architecture was not performance-driven, and we still expect that the final performance improvement was largely contributed by the adversarial pre-training. We present an ablation study in Table 3 to verify this expectation.

Table 3. Ablation study on tabular classification. Note that APT is TabPFN with adversarial pre-training and the mixture block.
| | Small: mean AUC ↑ | Small: rank ↓ | All: mean AUC ↑ | All: rank ↓ |
| --- | --- | --- | --- | --- |
| APT | 0.921 ± 0.003 | 2.11 ± 0.16 | 0.918 ± 0.006 | 2.1 ± 0.2 |
| APT w/o Mixture | 0.917 ± 0.005 | 2.09 ± 0.06 | 0.917 ± 0.005 | 2.1 ± 0.1 |
| TabPFN w/ Mixture | 0.914 ± 0.004 | 2.55 ± 0.22 | 0.914 ± 0.005 | 2.6 ± 0.2 |
| TabPFN | 0.913 ± 0.003 | 2.49 ± 0.16 | 0.914 ± 0.005 | 2.4 ± 0.2 |
Unsurprisingly, models with and without the mixture block did not dominate each other on mean AUC and rank collectively. Note that the mixture block was proposed for generalizing to datasets with an unseen number of classes, and we expect it to have little to no performance impact on datasets with a seen number of classes.

Regression Although ZSMLs are gradually catching up with GBDTs on classification problems and have likely reached a performance mark close to saturation on small classification problems, tabular regression remains an area where ZSMLs have not yet shown exceptional performance. We additionally report a study on the 35 datasets in the OpenML-CTR23 regression suite in Table 4, and show the progress APT has made on regression tasks over TabPFN.

Table 4. Ablation study on tabular regression. Small datasets are the 12 datasets in OpenML-CTR23 with data size no larger than 2,000. Note that APT is TabPFN with adversarial pre-training in this setting, since the mixture block is only used for classification tasks.
| | Small: mean MSE ↓ | Small: wins ↑ | All: mean MSE ↓ | All: wins ↑ |
| --- | --- | --- | --- | --- |
| TabPFN | 0.412 ± 0.077 | 3.8 ± 1.2 | 0.340 ± 0.025 | 6.4 ± 1.4 |
| APT | 0.344 ± 0.068 | 8.2 ± 1.2 | 0.306 ± 0.023 | 28.6 ± 1.4 |
From Table 4, it can be observed that incorporating adversarial pre-training has boosted the performance of TabPFN, yielding a larger number of wins by a significant margin. Note that we used the exact same synthetic data sampling distributions and hyperparameters that were used in TabPFN for the purpose of ablation, in order to clearly demonstrate the contribution of adversarial training. TabPFN was trained only on classification problems, and therefore it is possible that these hyperparameters are over-optimized for classification tasks and under-optimized for regression tasks.

# 4. Related Work

# 4.1. Tabular Learning

GBDTs such as XGBoost and others (Chen & Guestrin, 2016; Prokhorenkova et al., 2018; Ke et al., 2017) are commonly used for tabular data problems in the traditional one-model-for-one-dataset approach. At this point, numerous deep learning approaches have been developed for tabular data, mostly taking the one-model-for-one-dataset approach (Borisov et al., 2022; Somepalli et al., 2021; Gorishniy et al., 2021; Rubachev et al., 2022; Kadra et al., 2021a; Arik & Pfister, 2021; Popov et al., 2019; Kotelnikov et al., 2023; Gorishniy et al., 2024; 2022; Chen et al., 2024; Kadra et al., 2021b; Huang et al., 2020), but some have also ventured into transfer learning, many (but not all) leveraging large language models to find relevant information for the tabular data problem at hand (Levin et al., 2022; Yan et al., 2024; Borisov et al., 2023; Ye et al., 2024a; Spinaci et al., 2024; Hegselmann et al., 2023; Kim et al., 2024; Zhu et al., 2023).

Tabular Meta-Learning Auto-Sklearn, introduced in Feurer et al. (2015) and improved upon in Feurer et al. (2022), uses Bayesian optimization to determine the best algorithm and feature pre-processing steps for modeling a given dataset. Meta-learning is used to initialize the Bayesian optimization. In contrast to Auto-Sklearn and methods of transfer learning for tabular data, TabPFN (Müller et al., 2021) is trained solely on synthetic data to learn the general prediction logic of tabular classification and to acquire meaningful data representations at inference time. Helli et al. (2024) introduced a variant of TabPFN that was trained on a drifting synthetic data distribution, but the drift is independent of the performance of the model being optimized.

# 4.2. Zero-shot Learning

Works such as Xian et al. (2018; 2017); Chang et al. (2008); Larochelle et al. (2008); Palatucci et al. (2009) have shown the impressive capability of zero-shot learning in the space of language and vision problems. Recent approaches to zero-shot or few-shot learning for tabular data problems mostly encode tabular data as language, and then leverage large language models (LLMs) for their zero- or few-shot capabilities (see Hegselmann et al. (2023); Nam et al. (2023); Gardner et al. (2024)). These approaches rely on relevant information about the tabular data existing in LLMs; this is most obviously the case when column names are meaningful, but it is not guaranteed for broad tabular data problems.

# 4.3. Adversarial Training

Building upon generative adversarial networks (GANs) (Goodfellow et al., 2015; Madry et al., 2018; Kurakin et al., 2017), recent work such as Shafahi et al. (2019) improved efficiency by combining the back-propagation steps of the generator and discriminator.
However, this method has been shown to suffer from catastrophic overfitting (Andriushchenko & Flammarion, 2020; Kim et al., 2021) without further modifications. Other works focusing on improving the efficiency of GAN training include Wong et al. (2020) and Zhang et al. (2019), which restrict most of the forward and backward propagation to the first layer of the network during adversarial updates. Zhang et al. (2021) in particular noted that weight updates frequently go back and forth in opposite directions within one training epoch, suggesting those updates might be redundant. Many other variations have been introduced to mitigate vanishing gradients and additional challenges of GAN training (Jabbar et al., 2021), such as failure to find a Nash equilibrium (Ratliff et al., 2016) and internal covariate shift (Ioffe, 2015).

# 5. Conclusion

In this work, we made the first effort to explore the adversarial pre-training of deep zero-shot meta-learners, specifically PFNs. We proposed APT, a zero-shot meta-learner that improves the performance of TabPFN on tabular prediction tasks and matches state-of-the-art GBDTs on small tabular classification tasks. In addition, we proposed a mixture block neural design to eliminate the class size restriction of PFNs, addressing a crucial issue in their generalizability to broad classification problems. As for limitations, APT does not outperform GBDTs on large tabular datasets, and shares the quadratic computational complexity of TabPFN. Hence, extensions of this work could explore means of acquiring data representations in a less expensive manner. For example, considerable research in recent years has significantly accelerated the transformer and increased its context length (Wu et al., 2022; Bulatov et al., 2023). It would be a worthwhile effort for future research to apply these advancements to APT as well as other PFNs. Besides, future research could extend the mixture block to standard (non-zero-shot) classification settings in light of its ability to generalize and greatly accelerate convergence, which could improve the performance of traditional DNNs on small classification datasets. The mixture block or other alternatives to the dense final layer could also be explored in both standard and zero-shot regression settings, which could affect the inductive bias of DNNs and their underperformance relative to GBDTs (Grinsztajn et al., 2022) on certain kinds of tabular data.

# Acknowledgements

We thank Tyler Farnan, Gang Mei, and C. Bayan Bruss for the insightful discussions.

# Impact Statement

This paper presents work whose goal is to advance the field of Machine Learning. There are many potential societal consequences of our work, none which we feel must be specifically highlighted here.

# References

Ahmed, T. and Devanbu, P. Few-shot training llms for project-specific code-summarization. In Proceedings of the 37th IEEE/ACM International Conference on Automated Software Engineering, pp. 1-5, 2022.
Andriushchenko, M. and Flammarion, N. Understanding and improving fast adversarial training. Advances in Neural Information Processing Systems, 33:16048-16059, 2020.
Arik, S. Ö. and Pfister, T. Tabnet: Attentive interpretable tabular learning. In Proceedings of the AAAI conference on artificial intelligence, volume 35, pp. 6679-6687, 2021.
Bischl, B., Casalicchio, G., Feurer, M., Gijsbers, P., Hutter, F., Lang, M., Mantovani, R. G., van Rijn, J. N., and Vanschoren, J. Openml benchmarking suites.
Proceedings of the Neural Information Processing Systems Track on Datasets and Benchmarks, 2021. +Borisov, V., Leemann, T., Seßler, K., Haug, J., Pawelczyk, M., and Kasneci, G. Deep neural networks and tabular data: A survey. IEEE transactions on neural networks and learning systems, 2022. +Borisov, V., Sessler, K., Leemann, T., Pawelczyk, M., and Kasneci, G. Language models are realistic tabular data generators. In The Eleventh International Conference on Learning Representations, 2023. URL https://openreview.net/forum?id=cEygmmQNOeI. +Bulatov, A., Kuratov, Y., and Burtsev, M. S. Scaling transformer to 1m tokens and beyond with RMT. CoRR, abs/2304.11062, 2023. doi: 10.48550/ARXIV.2304.11062. URL https://doi.org/10.48550/arXiv.2304.11062. +Cahyawijaya, S., Lovenia, H., and Fung, P. Llms are few-shot in-context low-resource language learners. arXiv preprint arXiv:2403.16512, 2024. +Chang, M.-W., Ratinov, L.-A., Roth, D., and Srikumar, V. Importance of semantic representation: Dataless classification. In Aaii, volume 2, pp. 830-835, 2008. + +Chen, J., Yan, J., Chen, Q., Chen, D. Z., Wu, J., and Sun, J. Can a deep learning model be a sure bet for tabular prediction? In Proceedings of the 30th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, KDD '24, pp. 288-296, New York, NY, USA, 2024. Association for Computing Machinery. ISBN 9798400704901. doi: 10.1145/3637528.3671893. URL https://doi.org/10.1145/3637528.3671893. +Chen, T. and Guestrin, C. Xgboost: A scalable tree boosting system. In Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining, pp. 785-794, 2016. +Chui, M., Manyika, J., Miremadi, M., Henke, N., Chung, R., Nel, P., and Malhotra, S. Notes from the ai frontier: Insights from hundreds of use cases. McKinsey Global Institute, 2:267, 2018. +Cortes, C. Support-vector networks. Machine Learning, 1995. +Cover, T. and Hart, P. Nearest neighbor pattern classification. IEEE transactions on information theory, 13(1):21-27, 1967. +Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., and Houlsby, N. An image is worth 16x16 words: Transformers for image recognition at scale. CoRR, abs/2010.11929, 2020. URL https://arxiv.org/abs/2010.11929. +Feurer, M., Klein, A., Eggensperger, K., Springenberg, J., Blum, M., and Hutter, F. Efficient and robust automated machine learning. In Advances in Neural Information Processing Systems 28 (2015), pp. 2962-2970, 2015. +Feurer, M., Eggensperger, K., Falkner, S., Lindauer, M., and Hutter, F. Auto-sklearn 2.0: Hands-free automl via meta-learning. Journal of Machine Learning Research, 23:1-61, 2022. +Finn, C., Abbeel, P., and Levine, S. Model-agnostic meta-learning for fast adaptation of deep networks. In International conference on machine learning, pp. 1126-1135. PMLR, 2017. +Fischer, S. F., Feurer, M., and Bischl, B. Openml-ctr23-a curated tabular regression benchmarking suite. In AutoML Conference 2023 (Workshop), 2023. +Frans, K. and Witkowski, O. Population-based evolution optimizes a meta-learning objective. arXiv preprint arXiv:2103.06435, 2021. + +Gardner, J., Perdomo, J. C., and Schmidt, L. Large scale transfer learning for tabular data via language modeling, 2024. URL https://arxiv.org/abs/2406.12031. +Gevaert, O. Meta-learning reduces the amount of data needed to build ai models in oncology. British Journal of Cancer, 125(3):309-310, 2021. 
+Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., and Bengio, Y. Generative adversarial nets. Advances in neural information processing systems, 27, 2014. +Goodfellow, I., Shlens, J., and Szegedy, C. Explaining and harnessing adversarial examples. In International Conference on Learning Representations, 2015. URL http://arxiv.org/abs/1412.6572. +Gorishniy, Y., Rubachev, I., Khrulkov, V., and Babenko, A. Revisiting deep learning models for tabular data. Advances in Neural Information Processing Systems, 34: 18932-18943, 2021. +Gorishniy, Y., Rubachev, I., and Babenko, A. On embeddings for numerical features in tabular deep learning. In NeurIPS, 2022. +Gorishniy, Y., Rubachev, I., Kartashev, N., Shlenskii, D., Kotelnikov, A., and Babenko, A. Tabr: Tabular deep learning meets nearest neighbors. In The Twelfth International Conference on Learning Representations, 2024. URL https://openreview.net/forum?id=rhgIgTSSxW. +Grinsztajn, L., Oyallon, E., and Varoquaux, G. Why do tree-based models still outperform deep learning on typical tabular data? Advances in neural information processing systems, 35:507-520, 2022. +Hegselmann, S., Buendia, A., Lang, H., Agrawal, M., Jiang, X., and Sontag, D. Tabllm: Few-shot classification of tabular data with large language models. In Ruiz, F., Dy, J., and van de Meent, J.-W. (eds.), Proceedings of The 26th International Conference on Artificial Intelligence and Statistics, volume 206 of Proceedings of Machine Learning Research, pp. 5549-5581. PMLR, 25-27 Apr 2023. URL https://proceedings.mlr.press/v206/hecselmann23a.html. +Helli, K., Schnurr, D., Hollmann, N., Müller, S., and Hutter, F. Drift-resilient tabPFN: In-context learning distribution shifts on tabular data. In AutoML Conference 2024 (Workshop Track), 2024. URL https://openreview.net/forum?id=VbmqcoHpGT. +Ho, T. K. Random decision forests. In Proceedings of 3rd international conference on document analysis and recognition, volume 1, pp. 278-282. IEEE, 1995. + +Hollmann, N., Müller, S., Eggensperger, K., and Hutter, F. Tabpfn: A transformer that solves small tabular classification problems in a second. arXiv preprint arXiv:2207.01848, 2022. +Hollmann, N., Müller, S., Purucker, L., Krishnakumar, A., Körfer, M., Hoo, S. B., Schirrmeister, R. T., and Hutter, F. Accurate predictions on small data with a tabular foundation model. Nature, 637(8045):319-326, 2025. +Hornik, K., Stinchcombe, M., and White, H. Multilayer feedforward networks are universal approximators. Neural networks, 2(5):359-366, 1989. +Hospedales, T., Antoniou, A., Micaelli, P., and Storkey, A. Meta-learning in neural networks: A survey. IEEE transactions on pattern analysis and machine intelligence, 44(9):5149-5169, 2021. +Huang, X., Khetan, A., Cvitkovic, M., and Karnin, Z. Tabtransformer: Tabular data modeling using contextual embeddings, 2020. URL https://arxiv.org/abs/2012.06678. +Huisman, M., Van Rijn, J. N., and Plaat, A. A survey of deep meta-learning. Artificial Intelligence Review, 54(6): 4483-4541, 2021. +Ioffe, S. Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167, 2015. +Jabbar, A., Li, X., and Omar, B. A survey on generative adversarial networks: Variants, applications, and training. ACM Computing Surveys (CSUR), 54(8):1-49, 2021. +Jang, E., Gu, S., and Poole, B. Categorical reparameterization with gumbel-softmax. arXiv preprint arXiv:1611.01144, 2016. +Kadra, A., Lindauer, M., Hutter, F., and Grabocka, J. 
Well-tuned simple nets excel on tabular datasets. Advances in neural information processing systems, 34:23928-23941, 2021a. +Kadra, A., Lindauer, M., Hutter, F., and Grabocka, J. Well-tuned simple nets excel on tabular datasets. In Beygelzimer, A., Dauphin, Y., Liang, P., and Vaughan, J. W. (eds.), Advances in Neural Information Processing Systems, 2021b. URL https://openreview.net/forum?id=d3k38LTDCyO. +Ke, G., Meng, Q., Finley, T., Wang, T., Chen, W., Ma, W., Ye, Q., and Liu, T.-Y. Lightgbm: A highly efficient gradient boosting decision tree. Advances in neural information processing systems, 30, 2017. + +Kim, H., Lee, W., and Lee, J. Understanding catastrophic overfitting in single-step adversarial training. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, pp. 8119-8127, 2021. +Kim, M. J., Grinsztajn, L., and Varoquaux, G. CARTE: Pretraining and transfer for tabular learning. In Forty-first International Conference on Machine Learning, 2024. URL https://openreview.net/forum?id=9kArQnKLDp. +Kotelnikov, A., Baranchuk, D., Rubachev, I., and Babenko, A. TabDDPM: Modelling tabular data with diffusion models, 2023. URL https://openreview.net/forum?id=EJka_dVXEcr. +Kurakin, A., Goodfellow, I. J., and Bengio, S. Adversarial machine learning at scale. In International Conference on Learning Representations, 2017. URL https://openreview.net/forum?id=BJm4T4Kgx. +Larochelle, H., Erhan, D., and Bengio, Y. Zero-data learning of new tasks. In AAAI, volume 1, pp. 3, 2008. +Lemke, C., Budka, M., and Gabrys, B. Metalearning: a survey of trends and technologies. Artificial intelligence review, 44:117-130, 2015. +Levin, R., Cherepanova, V., Schwarzschild, A., Bansal, A., Bruss, C. B., Goldstein, T., Wilson, A. G., and Goldblum, M. Transfer learning with deep tabular models. arXiv preprint arXiv:2206.15306, 2022. +Luo, R., Tian, F., Qin, T., Chen, E., and Liu, T.-Y. Neural architecture optimization. Advances in neural information processing systems, 31, 2018. +Maddison, C. J., Mnih, A., and Teh, Y. W. The concrete distribution: A continuous relaxation of discrete random variables. arXiv preprint arXiv:1611.00712, 2016. +Madry, A., Makelov, A., Schmidt, L., Tsipras, D., and Vladu, A. Towards deep learning models resistant to adversarial attacks. In International Conference on Learning Representations, 2018. URL https://openreview.net/forum?id=rJzIBfZAb. +Mann, B., Ryder, N., Subbiah, M., Kaplan, J., Dhariwal, P., Neelakantan, A., Shyam, P., Sastry, G., Askell, A., Agarwal, S., et al. Language models are few-shot learners. arXiv preprint arXiv:2005.14165, 1, 2020. +McElfresh, D., Khandagale, S., Valverde, J., Prasad C, V., Ramakrishnan, G., Goldblum, M., and White, C. When do neural nets outperform boosted trees on tabular data? Advances in Neural Information Processing Systems, 36, 2024. + +Müller, S., Hollmann, N., Arango, S. P., Grabocka, J., and Hutter, F. Transformers can do bayesian inference. arXiv preprint arXiv:2112.10510, 2021. +Nagler, T. Statistical foundations of prior-data fitted networks. In International Conference on Machine Learning, pp. 25660-25676. PMLR, 2023. +Nam, J., Tack, J., Lee, K., Lee, H., and Shin, J. STUNT: Few-shot tabular learning with self-generated tasks from unlabeled tables. In The Eleventh International Conference on Learning Representations, 2023. URL https://openreview.net/forum?id=__xlsjehDvlY. +Nichol, A. and Schulman, J. Reptile: a scalable metalearning algorithm. arXiv preprint arXiv:1803.02999, 2(3):4, 2018. 
+Palatucci, M., Pomerleau, D., Hinton, G. E., and Mitchell, T. M. Zero-shot learning with semantic output codes. Advances in neural information processing systems, 22, 2009. +Perez, E., Kiela, D., and Cho, K. True few-shot learning with language models. Advances in neural information processing systems, 34:11054-11070, 2021. +Popov, S., Morozov, S., and Babenko, A. Neural oblivious decision ensembles for deep learning on tabular data arXiv preprint arXiv:1909.06312, 2019. +Press, O., Smith, N. A., and Lewis, M. Train short, test long: Attention with linear biases enables input length extrapolation. arXiv preprint arXiv:2108.12409, 2021. +Prokhorenkova, L., Gusev, G., Vorobev, A., Dorogush, A. V., and Gulin, A. Catboost: unbiased boosting with categorical features. Advances in neural information processing systems, 31, 2018. +Qu, J., Holzmüller, D., Varoquaux, G., and Morvan, M. L. Tabicl: A tabular foundation model for in-context learning on large data. arXiv preprint arXiv:2502.05564, 2025. +Ratliff, L. J., Burden, S. A., and Sastry, S. S. On the characterization of local nash equilibria in continuous games. IEEE transactions on automatic control, 61(8): 2301-2307, 2016. +Rubachev, I., Alekberov, A., Gorishniy, Y., and Babenko, A. Revisiting pretraining objectives for tabular deep learning. arXiv preprint arXiv:2207.03208, 2022. +Shafahi, A., Najibi, M., Ghiasi, M. A., Xu, Z., Dickerson, J., Studer, C., Davis, L. S., Taylor, G., and Goldstein, T. Adversarial training for free! Advances in neural information processing systems, 32, 2019. + +Shwartz-Ziv, R. and Armon, A. Tabular data: Deep learning is not all you need. Information Fusion, 81:84-90, 2022. +Somepalli, G., Goldblum, M., Schwarzschild, A., Bruss, C. B., and Goldstein, T. Saint: Improved neural networks for tabular data via row attention and contrastive pretraining. arXiv preprint arXiv:2106.01342, 2021. +Spinaci, M., Polewczyk, M., Hoffart, J., Kohler, M. C., Thelin, S., and Klein, T. PORTAL: Scalable tabular foundation models via content-specific tokenization. In NeurIPS 2024 Third Table Representation Learning Workshop, 2024. URL https://openreview.net/forum?id=TSZQvknbLO. +Su, J., Ahmed, M., Lu, Y., Pan, S., Bo, W., and Liu, Y. Enhanced transformer with rotary position embedding. arXiv preprint arXiv:2104.09864, 2021. +Tibshirani, R. Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society Series B: Statistical Methodology, 58(1):267-288, 1996. +Tikhonov, A. N. Solution of incorrectly formulated problems and the regularization method. Sov Dok, 4:1035-1038, 1963. +Tsimpoukelli, M., Menick, J. L., Cabi, S., Eslami, S., Vinyals, O., and Hill, F. Multimodal few-shot learning with frozen language models. Advances in Neural Information Processing Systems, 34:200-212, 2021. +Vanschoren, J. Meta-learning: A survey. arXiv preprint arXiv:1810.03548, 2018. +Verma, V. K., Brahma, D., and Rai, P. Meta-learning for generalized zero-shot learning. In Proceedings of the AAAI conference on artificial intelligence, volume 34, pp. 6062-6069, 2020. +Wong, E., Rice, L., and Kolter, J. Z. Fast is better than free: Revisiting adversarial training. In International Conference on Learning Representations, 2020. URL https://openreview.net/forum?id=BJx040EFvH. +Wu, Y., Rabe, M. N., Hutchins, D., and Szegedy, C. Memorizing transformers. In International Conference on Learning Representations, 2022. URL https://openreview.net/forum?id=TrjbxzRcnf-. +Xian, Y., Schiele, B., and Akata, Z. 
Zero-shot learning - the good, the bad and the ugly. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), July 2017. +Xian, Y., Lampert, C. H., Schiele, B., and Akata, Z. Zero-shot learning—a comprehensive evaluation of the good, the bad and the ugly. IEEE transactions on pattern analysis and machine intelligence, 41(9):2251-2265, 2018. + +Yan, J., Zheng, B., Xu, H., Zhu, Y., Chen, D., Sun, J., Wu, J., and Chen, J. Making pre-trained language models great on tabular prediction. In The Twelfth International Conference on Learning Representations, 2024. URL https://openreview.net/forum?id=anzIzGZuLi. +Ye, C., Lu, G., Wang, H., Li, L., Wu, S., Chen, G., and Zhao, J. Towards cross-table masked pretraining for web data mining. In The Web Conference 2024, 2024a. URL https://openreview.net/forum?id=9jj7cMOXQo. +Ye, H.-J., Liu, S.-Y., Cai, H.-R., Zhou, Q.-L., and Zhan, D.-C. A closer look at deep learning on tabular data. CoRR, abs/2407.00956, 2024b. URL https://doi.org/10.48550/arXiv.2407.00956. +Zhang, D., Zhang, T., Lu, Y., Zhu, Z., and Dong, B. You only propagate once: Accelerating adversarial training via maximal principle. In Wallach, H., Larochelle, H., Beygelzimer, A., d'Alché-Buc, F., Fox, E., and Garnett, R. (eds.), Advances in Neural Information Processing Systems, volume 32. Curran Associates, Inc., 2019. URL https://proceedings.neurips.cc/paper_files/paper/2019/file/812b4ba287f5ee0bc9d43bbf5bbe87fb-Paper.pdf. +Zhang, H., Shi, Y., Dong, B., Han, Y., Li, Y., and Kuang, X. Free adversarial training with layerwise heuristic learning. In International Conference on Image and Graphics, pp. 120-131. Springer, 2021. +Zhu, B., Shi, X., Erickson, N., Li, M., Karypis, G., and Shoaran, M. Xtab: Cross-table pretraining for tabular transformers. arXiv preprint arXiv:2305.06090, 2023. +Zintgraf, L., Schulze, S., Lu, C., Feng, L., Igl, M., Shiarlis, K., Gal, Y., Hofmann, K., and Whiteson, S. Varibad: Variational bayes-adaptive deep rl via meta-learning. Journal of Machine Learning Research, 22(289):1-39, 2021. URL http://jmlr.org/papers/v22/21-0657.html. + +# A. Background + +In this section, we give a brief introduction to PFNs, and specifically the synthetic data generating mechanism of TabPFN. For a more complete description, see Müller et al. (2021); Hollmann et al. (2022); Nagler (2023). Given training dataset $D^{\mathrm{train}} = (X^{\mathrm{train}}, \mathbf{y}^{\mathrm{train}})$ , the goal is to approximate the conditional outcome distribution $y^{\mathrm{test}} \sim p(\cdot | \mathbf{x}^{\mathrm{test}}, D^{\mathrm{train}})$ given a test point $\mathbf{x}^{\mathrm{test}}$ . 
In the Bayesian framework for supervised learning, the prior of the dataset is a hypothesis of the data generating mechanism $\phi$ drawn from hypothesis space $\Phi$, under which $p(\cdot | \mathbf{x}^{\mathrm{test}}, D^{\mathrm{train}})$ is a posterior predictive distribution (PPD) and can be factorized as follows by Bayes' rule:

$$
\begin{aligned}
p\left(\cdot \mid \boldsymbol{x}^{\mathrm{test}}, D^{\mathrm{train}}\right) &= \int_{\phi \in \Phi} p\left(\cdot \mid \boldsymbol{x}^{\mathrm{test}}, \phi\right) p\left(\phi \mid D^{\mathrm{train}}\right) d\phi && (3) \\
&= \int_{\phi \in \Phi} p\left(\cdot \mid \boldsymbol{x}^{\mathrm{test}}, \phi\right) \frac{p(\phi)\, p\left(D^{\mathrm{train}} \mid \phi\right)}{p\left(D^{\mathrm{train}}\right)}\, d\phi && (4) \\
&\propto \int_{\phi \in \Phi} p\left(\cdot \mid \boldsymbol{x}^{\mathrm{test}}, \phi\right) p\left(D^{\mathrm{train}} \mid \phi\right) p(\phi)\, d\phi . && (5)
\end{aligned}
$$

PFNs conduct synthetic prior fitting by defining a family of data generating mechanisms $\Phi$ from which independent samples $\pmb{x}_i\sim p(\pmb{x}_i) = \mathbb{E}_{p(\phi)}[p(\pmb{x}_i|\phi)]$ and $y_{i}\sim p(y_{i}) = \mathbb{E}_{p(\phi)}[p(y_{i}|\pmb{x}_{i},\phi)]$ are drawn to compose the feature matrix $(X^{\mathrm{train}},X^{\mathrm{test}})$ and response vector $(\pmb{y}^{\mathrm{train}},\pmb{y}^{\mathrm{test}})$ of a synthetic dataset $D = D^{\mathrm{train}}\cup D^{\mathrm{test}}$, and use a transformer model $q_{\theta}(\cdot |X^{\mathrm{test}},D^{\mathrm{train}})$ to approximate $p(\cdot |X^{\mathrm{test}},D^{\mathrm{train}})$ by minimizing their expected divergence over the synthetic data distribution

$$
\mathbb{E}_{p\left(D^{\mathrm{train}}, X^{\mathrm{test}}\right)}\left[\mathrm{KL}\left(p\left(\boldsymbol{y}^{\mathrm{test}} \mid X^{\mathrm{test}}, D^{\mathrm{train}}\right),\; q_{\theta}\left(\boldsymbol{y}^{\mathrm{test}} \mid X^{\mathrm{test}}, D^{\mathrm{train}}\right)\right)\right]. \tag{6}
$$

Since

$$
\begin{aligned}
&\nabla_{\theta}\, \mathbb{E}_{p\left(D^{\mathrm{train}}, X^{\mathrm{test}}\right)}\left[\mathrm{KL}\left(p\left(\boldsymbol{y}^{\mathrm{test}} \mid X^{\mathrm{test}}, D^{\mathrm{train}}\right),\; q_{\theta}\left(\boldsymbol{y}^{\mathrm{test}} \mid X^{\mathrm{test}}, D^{\mathrm{train}}\right)\right)\right] && (7) \\
&= \nabla_{\theta}\, \mathbb{E}_{p\left(D^{\mathrm{train}}, X^{\mathrm{test}}\right)}\left[H\left(p\left(\boldsymbol{y}^{\mathrm{test}} \mid X^{\mathrm{test}}, D^{\mathrm{train}}\right),\; q_{\theta}\left(\boldsymbol{y}^{\mathrm{test}} \mid X^{\mathrm{test}}, D^{\mathrm{train}}\right)\right)\right] && (8) \\
&= \nabla_{\theta}\, \mathbb{E}_{p\left(D^{\mathrm{train}}, D^{\mathrm{test}}\right)}\left[-\log q_{\theta}\left(\boldsymbol{y}^{\mathrm{test}} \mid X^{\mathrm{test}}, D^{\mathrm{train}}\right)\right], && (9)
\end{aligned}
$$

it is equivalent to minimizing the expected negative log-likelihood loss

$$
\mathcal{L}(q_{\theta}) = \mathbb{E}_{p(D)}\left[-\log q_{\theta}\left(\boldsymbol{y}^{\mathrm{test}} \mid X^{\mathrm{test}}, D^{\mathrm{train}}\right)\right]. \tag{10}
$$

TabPFN, in particular, conducts synthetic prior fitting by defining a family of sparsified-random-MLP-based data generating mechanisms $\Phi$, which we call ordinary data generators in the context of this paper. The following section gives a detailed description of the workflow of these generators.
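Before turning to the generators, the prior-fitting objective in Eq. (10) can be summarized in a short sketch. The snippet below is a hypothetical illustration rather than the released training code: `sample_synthetic_dataset` stands in for a draw from the prior $p(D)$, and `q_theta` for a transformer that predicts test labels given the training portion as context.

```python
import torch
import torch.nn.functional as F

def prior_fitting_step(q_theta, optimizer, sample_synthetic_dataset,
                       n_train=512, n_test=512):
    """One pre-training step of a PFN (Eq. 10): draw a synthetic dataset from the
    prior, split it into a context (train) part and a query (test) part, and
    minimize the negative log-likelihood of the test labels given the context."""
    X, y = sample_synthetic_dataset(n_train + n_test)   # hypothetical prior sampler
    X_tr, y_tr = X[:n_train], y[:n_train]
    X_te, y_te = X[n_train:], y[n_train:]

    logits = q_theta(X_tr, y_tr, X_te)                  # [n_test, n_classes]
    loss = F.cross_entropy(logits, y_te)                # -log q_theta(y_test | X_test, D_train)

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```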
# A.1. Ordinary Data Generator

To sample a data generating mechanism $\phi \sim \Phi$, TabPFN first initializes a random MLP by sampling a collection of hyperparameters from a pre-defined hyperparameter space, including the number of layers, hidden size, activation function, dropout probability, noise scales, etc. Specifically, the dropout probability is used to sparsify connections between neurons, and the noise scales dictate the amount of random noise injected into neurons at each layer. After the sparsified noisy random MLP is initialized, TabPFN randomly selects a subset of neurons in this MLP to be predictors $x_{i}$, and randomly selects one neuron to be the response $y_{i}$. With $n$ different random inputs to the MLP, a dataset with $n$ instances of $(x,y)$ is thus generated.

Discretization Since the generated data are values of selected MLP neurons, they are naturally continuous. To mimic real-world datasets that possess categorical features and to generate discrete class labels for classification tasks, TabPFN uses a ranking discretization approach that converts a subset of the continuous values to discrete ones by assigning certain quantile ranges of the continuous value $v$ to certain categories. A visual demonstration of this conversion can be found on the left side of Figure 6.

Normalization The generated synthetic data (as well as real-world datasets during inference time) are normalized across samples within each dataset, with the range of the values clipped to four standard deviations. Although the meta-learner might see datasets with unseen fields and out-of-distribution predictor-response relations during inference time, this ensures that at least the range of values will not be out-of-distribution as well.

![](images/204bd9552d297c431bcf7f04e2275fa71a6ae9047d9873f8cf5f10b656fc9858.jpg)
(a) Ranking Discretization

![](images/9abdeed5c3ef4350cf18bc21d5822c90a43ffcadb7fa067feb2f527723ef222c.jpg)
(b) Ranking Soft-discretization
Figure 6. Discretization of continuous variables. The $x$-axis is the value generated by the data generator, and the $y$-axis is its value after discretization. The soft-discretization approach produces near-categorical features that are differentiable and thus do not disrupt gradient flow. Intuitively, the adversarial data agents will try to produce a new value that escapes the range of the current category if the meta-learner becomes very good at identifying signal from the current category. However, the new category it escapes to is arbitrary and cannot be targeted by gradient updates, giving additional exploration potential to the adversarial agents.

# A.2. Limitations

Although there is no theoretical limitation on the amount of data PFNs can handle, the transformer architecture does entail significant computational complexity and memory usage for large datasets. Besides, given the dense input embedding layer and the dense final prediction layer, there is a theoretical limitation on the number of features and the number of classes that PFNs can handle. The former is less of an issue, since feature selection or simply random subsampling of features can be performed and PFNs would still yield ideal performance, as shown in McElfresh et al. (2024). The latter is a rather big problem for classification tasks because there is hardly any direct and effective work-around.
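To make the generator workflow of Appendix A.1 concrete, the following is a minimal numpy sketch of one ordinary data generator. The sampling ranges, the tanh activation, and the quantile binning are illustrative assumptions, not TabPFN's actual hyperparameter distributions.

```python
import numpy as np

def sample_ordinary_dataset(n_samples=1024, rng=None):
    """Generate one synthetic dataset from a randomly initialized sparse noisy MLP
    (illustrative hyperparameters, not TabPFN's actual sampling distributions)."""
    rng = np.random.default_rng() if rng is None else rng
    n_layers = rng.integers(2, 5)                    # sampled depth
    width = rng.integers(8, 33)                      # sampled hidden size
    dropout_p = rng.uniform(0.1, 0.9)                # sparsification probability
    noise_scale = rng.uniform(0.0, 0.3)              # per-layer noise injection

    # Random sparse weights, fixed for the whole dataset.
    weights = []
    for _ in range(n_layers):
        w = rng.standard_normal((width, width))
        w = w * (rng.random((width, width)) > dropout_p)   # drop connections
        weights.append(w)

    # Forward-pass n_samples random inputs and record all neuron activations.
    h = rng.standard_normal((n_samples, width))
    activations = [h]
    for w in weights:
        h = np.tanh(h @ w) + noise_scale * rng.standard_normal((n_samples, width))
        activations.append(h)
    neurons = np.concatenate(activations, axis=1)          # [n_samples, (n_layers+1)*width]

    # Randomly pick neurons to serve as predictors and one neuron as the response.
    n_features = rng.integers(2, 11)
    cols = rng.choice(neurons.shape[1], size=n_features + 1, replace=False)
    X, y_cont = neurons[:, cols[:-1]], neurons[:, cols[-1]]

    # Ranking discretization of the response into class labels via quantile bins.
    n_classes = rng.integers(2, 11)
    bins = np.quantile(y_cont, np.linspace(0, 1, n_classes + 1)[1:-1])
    y = np.digitize(y_cont, bins)
    return X, y
```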
# B. Handling of Large Datasets

Since expanding TabPFN's capabilities in handling large datasets is not the focus of this work, we only provide two simple adaptations such that APT can practically handle datasets of this nature. We recommend that practitioners try out concurrent and future developments with more involved model innovations for large datasets; in case they do not wish to do so, the following approaches can serve as a baseline.

# B.1. Uncertainty-based Batch Aggregation

For datasets with a large number of samples, to avoid computing attention spanning all training data points, which results in a quadratic number of operations and quadratic memory usage with respect to data size, we estimate the PPD with batches drawn from the training set:

$$
p(\boldsymbol{y}^{\mathrm{test}} \mid X^{\mathrm{test}}, D^{\mathrm{train}}) \approx \int_{b} q_{\theta}(\boldsymbol{y}^{\mathrm{test}} \mid X^{\mathrm{test}}, b) \cdot p(b \mid D^{\mathrm{train}}), \tag{11}
$$

which is equivalent to drawing uniform samples from the training set $D^{\mathrm{train}}$ and scaling the resulting predictions with weights $p(b \mid D^{\mathrm{train}})$. We cap the batch size at 3000 in alignment with McElfresh et al. (2024). For classification datasets with more than 3000 samples, we split the training set into batches and weigh the resulting predictions in proportion to the batch size (the prediction from the last batch may have less weight than the others). For regression tasks, we parameterize the model $q_{\theta}(y_i^{\mathrm{test}} \mid \boldsymbol{x}_i^{\mathrm{test}}, b)$ as a Gaussian distribution with parameters $(\mu_{\theta}(\boldsymbol{x}_i^{\mathrm{test}}, b), \sigma_{\theta}(\boldsymbol{x}_i^{\mathrm{test}}, b))$, similar to Hollmann et al. (2022), and directly produce the point estimate using the inverse-variance estimator at inference time:

$$
\begin{aligned}
\mathbb{E}\left[y_{i}^{\mathrm{test}} \mid \boldsymbol{x}_{i}^{\mathrm{test}}, D^{\mathrm{train}}\right] &\approx \int_{b} \mathbb{E}_{\theta}\left[y_{i}^{\mathrm{test}} \mid \boldsymbol{x}_{i}^{\mathrm{test}}, b\right] \cdot p(b \mid D^{\mathrm{train}}) && (12) \\
&= \left(\sum_{k=1}^{N} \frac{1}{\sigma_{\theta}^{2}\left(\boldsymbol{x}_{i}^{\mathrm{test}}, b_{k}\right)}\right)^{-1} \sum_{k=1}^{N} \frac{1}{\sigma_{\theta}^{2}\left(\boldsymbol{x}_{i}^{\mathrm{test}}, b_{k}\right)}\, \mu_{\theta}\left(\boldsymbol{x}_{i}^{\mathrm{test}}, b_{k}\right). && (13)
\end{aligned}
$$

The intuition is that the prediction from each batch is weighted by its uncertainty: more weight is put on predictions that the model is more certain of, and vice versa.
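A minimal numpy sketch of this batch aggregation is given below; the array shapes and function names are our own illustrative choices.

```python
import numpy as np

def aggregate_regression_batches(means, stds):
    """Inverse-variance aggregation of per-batch Gaussian predictions (Eqs. 12-13).

    means, stds: arrays of shape [n_batches, n_test], where row k holds
    mu_theta(x_test, b_k) and sigma_theta(x_test, b_k) for training batch b_k.
    Returns the aggregated point prediction of shape [n_test]."""
    precision = 1.0 / np.square(stds)                 # 1 / sigma^2, per batch and test point
    weights = precision / precision.sum(axis=0)       # normalize over batches
    return (weights * means).sum(axis=0)

def aggregate_classification_batches(probs, batch_sizes):
    """Weigh per-batch class-probability predictions in proportion to batch size.

    probs: [n_batches, n_test, n_classes]; batch_sizes: [n_batches]."""
    w = np.asarray(batch_sizes, dtype=float)
    w = w / w.sum()
    return np.tensordot(w, probs, axes=(0, 0))        # [n_test, n_classes]
```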
# B.2. Patch-based Feature Embedding

We drew inspiration from Dosovitskiy et al. (2020) and developed a patch-based embedding approach that adapts to datasets with an arbitrary number of features. In Hollmann et al. (2022), embeddings of $x$ are acquired by padding or clipping the number of features $d_{k}$ to a certain maximum feature size $d^{*}$, such that $x$ can be fed to a dense feedforward $e_{\theta}: \mathbb{R}^{d^{*}} \to \mathbb{R}^{d_{\mathrm{model}}}$. Instead, we split the features into patches, setting $d^{*}$ as the patch size, and only pad the last patch to $d^{*}$ dimensions if $d \not\equiv 0 \pmod{d^{*}}$. After feeding each patch to the dense feedforward $e_{\theta}$, we pass the patch embeddings to an attention block with optional relative positional encoding (Su et al., 2021; Press et al., 2021), and average-pool across the resulting embeddings of patches. Essentially, this is a half-way approach between using a dense feedforward to embed all features and using an attention block to tokenize each individual feature. In this way, the embedding block can handle features in a more flexible manner while controlling computational complexity and memory usage.
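The following is a minimal PyTorch sketch of this patch-based embedding. The layer sizes, the single self-attention block, and the omission of positional encodings are illustrative simplifications, not the exact APT architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PatchFeatureEmbedding(nn.Module):
    """Embed a row with an arbitrary number of features by splitting the features
    into patches of size patch_size, embedding each patch, letting the patch
    embeddings attend to each other, and average-pooling the result."""

    def __init__(self, patch_size: int = 25, d_model: int = 512, n_heads: int = 4):
        super().__init__()
        self.patch_size = patch_size
        self.patch_embed = nn.Linear(patch_size, d_model)   # dense feedforward e_theta
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: [batch, d] with arbitrary feature count d
        batch, d = x.shape
        pad = (-d) % self.patch_size                         # pad only the last patch
        x = F.pad(x, (0, pad))
        patches = x.view(batch, -1, self.patch_size)         # [batch, n_patches, patch_size]
        tokens = self.patch_embed(patches)                   # [batch, n_patches, d_model]
        tokens, _ = self.attn(tokens, tokens, tokens)        # patches attend to each other
        return tokens.mean(dim=1)                            # [batch, d_model]


# Example: rows with 137 features are embedded into a fixed d_model-sized vector.
emb = PatchFeatureEmbedding(patch_size=25, d_model=512, n_heads=4)
out = emb(torch.randn(8, 137))
print(out.shape)  # torch.Size([8, 512])
```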
# C. Hyperparameter Settings

All common hyperparameters of APT are directly inherited from TabPFN and not tuned, including learning rate $10^{-4}$, number of blocks 12, hidden dimension 512, hidden feedforward dimension 1024, number of heads 4, effective batch size (batch size per step $\times$ number of gradient accumulation steps) 64, total number of training datasets (number of epochs $\times$ steps per epoch $\times$ number of datasets per step) 6,400,000, as well as all data generator hyperparameters. For more details on the data generator hyperparameters, see the code repository in our supplementary material.

# D. More Results

We list the performance of top algorithms on small classification datasets in Table 5. Standard deviations are calculated across 5 different splits.

Table 5. The ROC-AUC of top algorithms on the 35 small datasets in OpenML-CC18.

| | LightGBM | XGBoost | SVM | TabPFN | CatBoost | APT |
| --- | --- | --- | --- | --- | --- | --- |
| mfeat-fourier | .981 ± .004 | .982 ± .004 | .982 ± .004 | .985 ± .002 | .984 ± .002 | .983 ± .003 |
| breast-w | .993 ± .006 | .993 ± .006 | .995 ± .007 | .997 ± .003 | .996 ± .005 | .997 ± .003 |
| mfeat-karhunen | .999 ± .001 | .999 ± .001 | 1 ± 0 | .999 ± 0 | .999 ± 0 | 1 ± 0 |
| mfeat-morphological | .959 ± .004 | .961 ± .002 | .965 ± .006 | .967 ± .003 | .964 ± .003 | .966 ± .006 |
| mfeat-zernike | .970 ± .004 | .973 ± .004 | .992 ± .003 | .982 ± .001 | .974 ± .003 | .977 ± .003 |
| cmc | .751 ± .036 | .758 ± .036 | .690 ± .020 | .736 ± .031 | .758 ± .037 | .739 ± .026 |
| credit-approval | .931 ± .030 | .920 ± .022 | .912 ± .024 | .928 ± .029 | .931 ± .030 | .930 ± .022 |
| credit-g | .809 ± .018 | .824 ± .028 | .816 ± .020 | .835 ± .018 | .816 ± .025 | .846 ± .024 |
| diabetes | .821 ± .027 | .812 ± .037 | .811 ± .050 | .817 ± .026 | .827 ± .025 | .824 ± .016 |
| tic-tac-toe | 1 ± 0 | 1 ± 0 | 1 ± 0 | .993 ± .003 | 1 ± 0 | .997 ± .002 |
| vehicle | .936 ± .009 | .945 ± .008 | .965 ± .011 | .965 ± .005 | .941 ± .008 | .961 ± .008 |
| eucalyptus | .900 ± .022 | .894 ± .024 | .874 ± .009 | .908 ± .013 | .905 ± .019 | .912 ± .017 |
| analcatdata_authorship | 1 ± 0 | 1 ± 0 | 1 ± 0 | 1 ± 0 | 1 ± 0 | 1 ± 0 |
| pc4 | .953 ± .008 | .954 ± .012 | .907 ± .058 | .957 ± .013 | .961 ± .011 | .964 ± .016 |
| pc3 | .814 ± .031 | .831 ± .048 | .706 ± .055 | .848 ± .044 | .829 ± .042 | .865 ± .032 |
| kc2 | .887 ± .060 | .862 ± .102 | .881 ± .052 | .875 ± .079 | .885 ± .084 | .896 ± .087 |
| blood-transfusion-service-center | .740 ± .085 | .722 ± .068 | .705 ± .075 | .750 ± .082 | .732 ± .077 | .751 ± .086 |
| cnae-9 | .981 ± .005 | .994 ± .005 | .998 ± .001 | .812 ± .032 | .991 ± .005 | .901 ± .014 |
| ilpd | .767 ± .067 | .751 ± .038 | .628 ± .085 | .792 ± .046 | .787 ± .059 | .808 ± .035 |
| wdbc | .993 ± .006 | .989 ± .007 | .998 ± .003 | .997 ± .003 | .993 ± .003 | .997 ± .004 |
| dresses-sales | .685 ± .028 | .618 ± .045 | .669 ± .027 | .552 ± .056 | .637 ± .051 | .617 ± .049 |
| MiceProtein | 1 ± 0 | 1 ± 0 | 1 ± 0 | 1 ± 0 | 1 ± 0 | 1 ± 0 |
| steel-plates-fault | .975 ± .003 | .979 ± .003 | .964 ± .006 | .970 ± .005 | .978 ± .003 | .969 ± .006 |
| climate-model-simulation-crashes | .944 ± .043 | .936 ± .052 | .951 ± .070 | .960 ± .053 | .949 ± .044 | .960 ± .058 |
| balance-scale | .970 ± .027 | .998 ± .003 | .994 ± .006 | .997 ± .004 | .949 ± .014 | .998 ± .003 |
| mfeat-factors | .999 ± .001 | .999 ± .001 | .999 ± .001 | .999 ± .001 | .999 ± 0 | .999 ± .001 |
| vowel | .999 ± .001 | .999 ± .001 | .999 ± .001 | 1 ± 0 | 1 ± 0 | 1 ± 0 |
| analcatdata_dmft | .595 ± .032 | .597 ± .029 | .601 ± .033 | .577 ± .044 | .582 ± .027 | .593 ± .040 |
| pc1 | .901 ± .065 | .917 ± .063 | .802 ± .127 | .917 ± .059 | .916 ± .058 | .942 ± .041 |
| banknote-authentication | 1 ± 0 | 1 ± 0 | 1 ± 0 | 1 ± 0 | 1 ± 0 | 1 ± 0 |
| qsar-biodeg | .934 ± .015 | .925 ± .012 | .932 ± .017 | .944 ± .016 | .935 ± .017 | .944 ± .013 |
| semeion | .998 ± .001 | .999 ± .001 | .999 ± 0 | .984 ± .004 | .999 ± .001 | .980 ± .004 |
| cylinder-bands | .898 ± .041 | .873 ± .036 | .913 ± .035 | .911 ± .021 | .904 ± .044 | .913 ± .031 |
| car | 1 ± 0 | 1 ± 0 | 1 ± 0 | .999 ± .001 | 1 ± 0 | .997 ± .005 |
| mfeat-pixel | .999 ± 0 | 1 ± 0 | 1 ± 0 | .999 ± 0 | 1 ± 0 | .999 ± 0 |
\ No newline at end of file diff --git a/ICML/2025/Zero-shot Meta-learning for Tabular Prediction Tasks with Adversarially Pre-trained Transformer/images.zip b/ICML/2025/Zero-shot Meta-learning for Tabular Prediction Tasks with Adversarially Pre-trained Transformer/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..e4fce49ecaa5cca04b60a01ead25702b49712cc2 --- /dev/null +++ b/ICML/2025/Zero-shot Meta-learning for Tabular Prediction Tasks with Adversarially Pre-trained Transformer/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d873cde2e5f485cd0dce09da5717c5351c4a6653f09e5e835fcac6d627a57269 +size 705988 diff --git a/ICML/2025/Zero-shot Meta-learning for Tabular Prediction Tasks with Adversarially Pre-trained Transformer/layout.json b/ICML/2025/Zero-shot Meta-learning for Tabular Prediction Tasks with Adversarially Pre-trained Transformer/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..6d9395b89844090627779ca77c50ac0532953a43 --- /dev/null +++ b/ICML/2025/Zero-shot Meta-learning for Tabular Prediction Tasks with Adversarially Pre-trained Transformer/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ee1fa0c19d5a0d5bfc0fcf26bdd67aa0ed14c81038687c0f7bbfd24972225070 +size 525287 diff --git a/ICML/2025/ZeroFlow_ Overcoming Catastrophic Forgetting is Easier than You Think/607f6af7-947e-47ac-8e4d-68bda5925360_content_list.json b/ICML/2025/ZeroFlow_ Overcoming Catastrophic Forgetting is Easier than You Think/607f6af7-947e-47ac-8e4d-68bda5925360_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..88ade2d96665d25f758607c07867648809a7d9e0 --- /dev/null +++ b/ICML/2025/ZeroFlow_ Overcoming Catastrophic Forgetting is Easier than You Think/607f6af7-947e-47ac-8e4d-68bda5925360_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:deaf3d5130c54c56c0b482f41e9d59ae251539c37722567da6c374764c737892 +size 128152 diff --git a/ICML/2025/ZeroFlow_ Overcoming Catastrophic Forgetting is Easier than You Think/607f6af7-947e-47ac-8e4d-68bda5925360_model.json b/ICML/2025/ZeroFlow_ Overcoming Catastrophic Forgetting is Easier than You Think/607f6af7-947e-47ac-8e4d-68bda5925360_model.json new file mode 100644 index 0000000000000000000000000000000000000000..a7abab99ba65673f435319452925ee8a14d4d98a --- /dev/null +++ b/ICML/2025/ZeroFlow_ Overcoming Catastrophic Forgetting is Easier than You Think/607f6af7-947e-47ac-8e4d-68bda5925360_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:6232ec0ecaacd2e5ba0a0f7a0a9b4f0dca6fe2c2a8e1c922f2819eda64ec3088 +size 162171 diff --git a/ICML/2025/ZeroFlow_ Overcoming Catastrophic Forgetting is Easier than You Think/607f6af7-947e-47ac-8e4d-68bda5925360_origin.pdf b/ICML/2025/ZeroFlow_ Overcoming Catastrophic Forgetting is Easier than You Think/607f6af7-947e-47ac-8e4d-68bda5925360_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..e260a17bcb936b6fb4d21c68d3ef6f42f0cf8ece --- /dev/null +++ b/ICML/2025/ZeroFlow_ Overcoming Catastrophic Forgetting is Easier than You Think/607f6af7-947e-47ac-8e4d-68bda5925360_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:eedc6a5121f1b70a55b96ab918cd444f5cfa0b77b004285fe1e49e79a3d56e7c +size 5663905 diff --git a/ICML/2025/ZeroFlow_ Overcoming Catastrophic Forgetting is Easier than You Think/full.md b/ICML/2025/ZeroFlow_ Overcoming Catastrophic Forgetting is Easier than 
You Think/full.md new file mode 100644 index 0000000000000000000000000000000000000000..ad111128a7b2b2efa4c30ff6d72c6d0361ff0ac7 --- /dev/null +++ b/ICML/2025/ZeroFlow_ Overcoming Catastrophic Forgetting is Easier than You Think/full.md @@ -0,0 +1,574 @@ +# ZeroFlow: Overcoming Catastrophic Forgetting is Easier than You Think + +Tao Feng1 Wei Li1* Didi Zhu2 Hangjie Yuan2 Wendi Zheng1 Dan Zhang1 Jie Tang1 https://zeroflow-bench.github.io/ + +# Abstract + +Backpropagation provides a generalized configuration for overcoming catastrophic forgetting. Optimizers such as SGD and Adam are commonly used for weight updates in continual learning and continual pre-training. However, access to gradient information is not always feasible in practice due to black-box APIs, hardware constraints, or non-differentiable systems, a challenge we refer to as the gradient bans. To bridge this gap, we introduce ZeroFlow, the first benchmark designed to evaluate gradient-free optimization algorithms for overcoming forgetting. ZeroFlow examines a suite of forward pass-based methods across various algorithms, forgetting scenarios, and datasets. Our results show that forward passes alone can be sufficient to mitigate forgetting. We uncover novel optimization principles that highlight the potential of forward pass-based methods in mitigating forgetting, managing task conflicts, and reducing memory demands. Additionally, we propose new enhancements that further improve forgetting resistance using only forward passes. This work provides essential tools and insights to advance the development of forward-pass-based methods for continual learning. + +# 1. Introduction + +Catastrophic forgetting remains one of the major challenges on the path to artificial general intelligence (AGI) (Hadsell et al., 2020; Zhou et al., 2023b), i.e., models tend to forget previously learned tasks when trained on new ones on time-evolving data flow (Feng et al., 2022b). This phenomenon is commonly seen across various tasks, including continual learning (CL) (Wang et al., 2023), fine-tuning of foundation models (FMs) (Sun et al., 2025; Yuan et al., 2024), and + +*Core contribution 1Tsinghua University 2Zhejiang University. Correspondence to: Jie Tang . + +Proceedings of the $42^{nd}$ International Conference on Machine Learning, Vancouver, Canada. PMLR 267, 2025. Copyright 2025 by the author(s). + +![](images/c1e23d5cceabac6ba75abc3d33308b500d6d5038c3b6dc8fcbbde45cb156fb40.jpg) +Figure 1: Illustrations of ZeroFlow. New tasks (or downstream tasks) arrive sequentially, the gradient bans block the model from learning and memorizing using backpropagation. ZeroFlow overcome this issue via forward passes. + +continual pre-training (CPT) (Shi et al., 2024; Zhu et al., 2024b), etc. Among them, optimization algorithms play a crucial role, e.g., SGD has become the default choice during CL (van de Ven et al., 2022), while Adam is frequently seen in fine-tuning FMs (Luo et al., 2023; Zhu et al., 2024a). These optimization algorithms in tandem with various methods (ranging from regularization and rehearsal strategies to architectural changes) rely on gradient information to avoid forgetting (Zhou et al., 2023c; Bian et al., 2024). Nonetheless, in real-world scenarios, gradient information is not always available or computable (i.e., the gradient bans), like, Scenario i: large language models as a service (LLMaaS) and black-box APIs. Scenario ii: hardware systems that do not support principled backpropagation. 
Scenario iii: AI for science with non-differentiable underlying systems.

In other words, Scenario $i$ implies that pretrained models are monetized (Miura et al., 2024): model owners do not publicly release their pretrained models but instead offer the service, i.e., only the inputs and outputs are accessible (Gan et al., 2023; Sun et al., 2022). Scenarios ii/iii imply that practical limitations prevent or restrict the execution of backpropagation (Lillicrap et al., 2020), e.g., extremely high memory demands (Mangrulkar et al., 2022), unsupported systems and hardware (Jabri & Flower, 1992), or non-differentiable functions (Tavanaei et al., 2019; Gu et al., 2021). The above means that typical methods for overcoming forgetting are not available because backpropagation is banned, as illustrated in Figure 1. This yields the primary question to be explored,

![](images/b50291f69431b08b55045a065c2de69b4ac4accfc06e5aa7cd75b3035ad9a639.jpg)
(a) EASE on average accuracy

![](images/1cc97c5b99e6fe073baa578a84cf6adc63a88f393515487010f47aadafc4dbd2.jpg)
(b) EASE on forgetting

![](images/be5eec2af41e0e948a440949e55a0d851bd2a04db0905e1718cb19779e8e7153.jpg)
(c) APER on average accuracy

![](images/478b9c7df3bf6aad93a5a91ca1171221f3004f5d37b3c5905e3457e84f700b6f.jpg)
(d) APER on forgetting
Figure 2: ZeroFlow Evaluation Results of Catastrophic Forgetting. We visualize the evaluation results of 2 models (EASE (Zhou et al., 2024b) and APER (Zhou et al., 2023a)) along several ZeroFlow dimensions (average accuracy over all tasks and a forgetting metric). For comprehensive numerical results, please refer to Table 1.

(Q) Could we establish a benchmark under gradient bans for overcoming catastrophic forgetting, and explore the overlooked optimization principles?

To tackle $(Q)$, a natural idea is to use forward pass-based methods (Hinton, 2022; Baydin et al., 2022; Ren et al., 2022) instead of backpropagation to overcome forgetting. Zeroth-order (ZO) optimization methods (Flaxman et al., 2004; Nesterov & Spokoiny, 2017; Malladi et al., 2023; Ghadimi & Lan, 2013), as representative methods, are well-suited to this issue due to their relaxed information requirements, as they rely only on function values rather than gradients. Under gradient bans, DECL and DFCL (Yang et al., 2024) make the first attempt to overcome forgetting from a stream of APIs, but they focus on the synthetic data level rather than on optimization. Therefore, it remains elusive whether benchmark studies using gradient-free methods can mitigate forgetting.

In this work, we explore several Zeroth-order optimization methods on dynamic data Flow (as shown in Figure 1), examining their performance across various forgetting scenarios, model types, and evaluation metrics. Through a detailed analysis, we reveal the overlooked potential of forward passes and various ZO methods in overcoming catastrophic forgetting. This benchmark study offers an easier way to overcome forgetting and helps reveal the pros and cons of these methods in alleviating forgetting. Building on the gained insights, we introduce three new enhancement variants that further improve ZO optimization for overcoming catastrophic forgetting. Simply put, we can mitigate forgetting more effectively and efficiently using only forward passes.

Our rationale for choosing ZO optimization algorithms to overcome forgetting rests on the following two key considerations: (i) implementation cost minimization, that is, we expect minimal modifications to existing optimizers.
(ii) diversity of methods, that is, we expect to cover diverse optimization methods. These considerations ensure that our benchmark is comprehensive yet simple. An appealing property is that forward passes alone are enough to overcome forgetting. Maybe, once is all it takes!

To sum up, our contributions are listed below:

(i) We propose the first benchmark, ZeroFlow, for overcoming forgetting under gradient bans. This benchmark includes our investigations into 7 forward pass optimization algorithms, several forgetting scenarios and datasets with varying complexity, and task sequences (see Figure 2).
(ii) Through this benchmark, we uncover overlooked optimization principles and insights into how forward passes can mitigate forgetting. These include the role of forward passes in managing task conflicts and the trade-offs between forgetting and memory efficiency. We show that catastrophic forgetting can be overcome in an easier way!
(iii) Apart from a comprehensive evaluation of catastrophic forgetting, we introduce three enhancement techniques, which further improve the performance and efficiency of forward passes alone in overcoming forgetting.

# 2. Related Work

Catastrophic forgetting. Catastrophic forgetting occurs across various tasks, including CL, fine-tuning of FMs, and CPT (Zhou et al., 2023b; Wang et al., 2023; Zhuang et al., 2022a; Luo et al., 2023). To mitigate this issue, various methods have been proposed (Aojun et al., 2025; Jeeveswaran et al., 2023; Sun et al., 2023b; Li et al., 2024). In CL, methods range from regularization and rehearsal strategies to architectural changes (Zhuang et al., 2023; Bian et al., 2024; Lu et al., 2024). Lately, pre-trained models (PTMs) have further advanced these methods due to their strong generalization (Yuan et al., 2022; Feng et al., 2022a), as seen in PTM-based CL (Zhou et al., 2024a). All these methods share a common goal: achieving an optimal balance between learning plasticity and memory stability (Wang et al., 2023). In FMs, catastrophic forgetting often arises from overfitting to small fine-tuning datasets during CPT or fine-tuning (Luo et al., 2023; Zhu et al., 2024a). Common techniques to address this include learning rate adjustment, parameter-efficient fine-tuning, mixed data strategies, and instruction tuning (Luo et al., 2023; Zhang et al., 2025). Additionally, as foundation models increasingly gain multimodal capabilities, the complexity of catastrophic forgetting also intensifies (Zhao et al., 2024a; Zhu et al., 2024a).

Optimization for catastrophic forgetting. Two broad categories of optimization methods exist for overcoming forgetting: (i) Standard Optimization. SGD and the Adam family are frequently employed to investigate catastrophic forgetting (Hadsell et al., 2020; Masana et al., 2022). For instance, in CL, various methods predominantly utilize the SGD optimizer for standard evaluations (van de Ven et al., 2022; Sun et al., 2023a; Zhou et al., 2024c). In fine-tuning LLMs, the Adam series is commonly used to observe forgetting phenomena (Luo et al., 2023; Zhu et al., 2024a). Some works explored orthogonal spaces with these standard optimizers to alleviate forgetting (Lopez-Paz & Ranzato, 2017; Feng et al., 2022c; Saha et al., 2020), such as OGD (Farajtabar et al., 2020) and GPM (Saha et al., 2020).
Moreover, other works (Farajtabar et al., 2020; Chaudhry et al., 2018; Lopez-Paz & Ranzato, 2017) modified the gradients in the standard optimization process to align the learning spaces of new and old tasks, such as Uni-Grad (Li et al., 2024). The core of these efforts (Deng et al., 2021; Shi et al., 2021) is to find an equilibrium between learning and forgetting in optimization. (ii) Sharpness-aware Optimization. This series of methods (He et al., 2019; Foret et al., 2020; Zhong et al., 2022; Zhuang et al., 2022b) has gained attention due to the effectiveness of flat minima in mitigating forgetting (Li et al., 2024; Kong et al., 2023; Cha et al., 2021; Mehta et al., 2023). Methods such as FS-DPGM (Deng et al., 2021), F2M (Shi et al., 2021), DFGP (Yang et al., 2023), and SAM-CL (Tung et al., 2023) overcome forgetting in flat regions under different configurations. C-Flat (Bian et al., 2024) proposed a CL-friendly general optimization framework that holds promise as a baseline optimizer for overcoming forgetting.

Our work. The works mentioned above are all rooted in a gradient feedback mechanism. Such mechanisms are powerless against catastrophic forgetting without explicit gradient information. Our work overcomes forgetting via forward passes alone instead of gradient feedback.

# 3. Exploring Zeroth-Order Optimization to Overcome Forgetting

# C.1. Zeroth-Order Optimization

Zeroth-order (ZO) optimization has been extensively studied over the years within the realms of numerical computation and approximation algorithms. It functions as an alternative solution for estimating descent directions in scenarios where first-order (FO) gradients are either inaccessible or infeasible to compute. Consider a deep learning model parameterized by $\theta \in \Theta \subseteq \mathbb{R}^d$ and a mini-batch $\mathcal{B}$ extracted from the training dataset $\mathcal{D} = \{(x_i, y_i)\}_{i=1}^m$. Let $\mathcal{L}(\theta; \mathcal{B})$ denote the empirical loss; the generic formulation of ZO optimization then follows Algorithm 1.

# Algorithm 1 Generic formulation of ZO optimization

Require: Initialized model parameters $\theta_0\in \Theta \subseteq \mathbb{R}^d$, training dataset $\mathcal{D} = \{(x_i,y_i)\}_{i = 1}^m\in \mathcal{X}\times \mathcal{Y}$, empirical loss function $\mathcal{L}$, learning rate $\eta_t$, gradient perturbation vector $\xi$, and descent direction computation $\phi(\cdot)$

1: while $\theta_t$ not converged do

2: Sample mini-batch $\mathcal{B}$ from $\mathcal{D}$

3: Step 1. ZO gradient estimation:

4: $\hat{\mathbf{g}}_t = \hat{\nabla}\mathcal{L}(\theta_t, \xi; \mathcal{B})$

5: Step 2. Descent direction computation:

6: $\mathbf{h}_t = \phi(\{\hat{\mathbf{g}}_i\}_{i = 1}^t)$

7: Step 3. Parameter updating:

8: $\theta_{t + 1} = \theta_t - \eta_t\cdot \mathbf{h}_t$

9: $t = t + 1$

10: end while

Ensure: Updated model $\theta_t$

1) ZO gradient estimation. Randomized Gradient Estimation (RGE (Nesterov & Spokoiny, 2017)) and Coordinate-wise Gradient Estimation (CGE (Berahas et al., 2022)) perturb the model using $\xi$, which is generated either from a random unknown distribution (in RGE) or by modifying individual coordinates (in CGE), and then observe the changes in the loss function $\mathcal{L}$ after each perturbation, step by step, to provide a reliable gradient estimate.
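To make Algorithm 1 concrete, below is a minimal, self-contained Python sketch of a ZO-SGD-style loop with a simple two-point random-direction (RGE-style) estimate on a toy objective; the function names and the toy loss are illustrative assumptions rather than the benchmark implementation.

```python
import numpy as np

def zo_gradient_estimate(loss_fn, theta, eps=1e-3, rng=None):
    # Step 1 of Algorithm 1: two loss evaluations along a random direction xi
    # give a scalar finite difference, which is mapped back onto xi.
    if rng is None:
        rng = np.random.default_rng()
    xi = rng.standard_normal(theta.shape)
    delta = loss_fn(theta + eps * xi) - loss_fn(theta - eps * xi)
    return (delta / (2.0 * eps)) * xi

def zo_sgd(loss_fn, theta0, lr=0.05, steps=500, eps=1e-3, seed=0):
    # Steps 2 and 3: plain ZO-SGD uses the estimate itself as the descent
    # direction (phi is the identity) and applies a constant learning rate.
    rng = np.random.default_rng(seed)
    theta = theta0.copy()
    for _ in range(steps):
        g_hat = zo_gradient_estimate(loss_fn, theta, eps=eps, rng=rng)
        theta = theta - lr * g_hat
    return theta

# Toy objective standing in for L(theta; B) on a mini-batch.
target = np.array([1.0, -2.0, 0.5])
loss = lambda th: float(np.sum((th - target) ** 2))
print(zo_sgd(loss, theta0=np.zeros(3)))  # approaches `target` using forward passes only
```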
However, due to their reliance on slow single-direction perturbation, these methods are not well-suited for deep learning tasks, as performing a full perturbation in high-dimensional parameter spaces is time-consuming. For instance, typical vision models like ResNet trained on ImageNet have over 25 million parameters. Performing per-dimension perturbations over such a large parameter space renders ZO-based querying highly inefficient. Standard Simultaneous Perturbation Stochastic Approximation (SPSA(Spall, 1992)) improves efficiency by generating pairs of symmetric vectors and perturbing in multiple directions simultaneously, as follows, + +$$ +\hat {\nabla} L (\theta , \xi ; \mathcal {B}) = \frac {L (\theta + \epsilon \xi ; \mathcal {B}) - L (\theta - \epsilon \xi ; \mathcal {B})}{2 \epsilon} \xi^ {- 1}. \tag {1} +$$ + +Where $\epsilon$ is a positive scalar and $\xi$ is recommended to follow a symmetric distribution with finite inverse moments (e.g., the Rademacher distribution). The symmetric distribution ensures unbiased exploration of perturbations in both positive and negative directions of parameters at each step. And the finite inverse moments property guarantees that the steps + +are well-controlled, avoiding excessively large steps due to $\xi^{-1}$ drawn from the distribution (e.g., $\mathbb{E}[1 / |\xi|^p]$ for some large p), which would otherwise lead to an unstable optimization process. In practical implementations for models with a large number of parameters (e.g., MeZO (Malladi et al., 2023) in LLMs (Zhao et al., 2024b)), Gaussian noise with zero mean induces substantial perturbations, thereby enhancing exploration across the parameter space and facilitating the escape from local minima. This methodology achieves gradient estimation with only two objective function evaluations, rendering its computational cost independent of input dimensionality. Such computational efficiency has established SPSA as a preferred method for addressing the complexities of high-dimensional deep learning tasks. While increasing $q$ in $q$ -SPSA can improve stability in the update direction, setting $q = 1$ is sufficient for pretrained LLMs (Malladi et al., 2023). + +2) Descent direction computation. In unconstrained optimization for deep learning, the last gradients $h_t$ generally coincide with the estimated ZO gradients $\hat{g}_t$ (e.g., ZOSGD (Ghadimi & Lan, 2013), ZO-SCD (Lian et al., 2016)). To reduce approximation errors, ZO-SGD-Sign (Liu et al., 2019) applies an element-wise sign(·) operation. Additionally, ZO-SVRG (Liu et al., 2018), inspired by variance reduction methods in first-order optimization, adjusts the update step by using estimated gradients from previous training examples. CARS (Kim et al., 2021) adaptively selects the smallest function value in each iteration, which helps maintain monotonicity during optimization. + +3) Parameter updating. Normally, for most ZO methods, parameters are updated in a similar way with FO optimizers, and the learning rate $\eta_t$ is set to constant. Except for the special design for achieving some constraint prerequisites, several methods make an effort to strike a balance between converge speed and accuracy. ZO-AdaMM (Chen et al., 2019) uses an adaptive learning rate and refines gradient estimation by incorporating momentum from past information. 
This approach is particularly effective in handling complex and evolving optimization landscapes, where the function's behavior may vary over time or be hard to capture with straightforward gradient approximations. + +# C.2. Zeroth-Order Optimization for Catastrophic Forgetting + +Rationality. ZO optimization leverages the function values of the forward passes to approximate FO gradients, making it feasible to avoid gradient bans. This feature enables seamless integration into common forgetting scenarios, such as CL. We explore it in the following three categories. + +i) Memory-based methods maintain a repository of exemplars from previous tasks and dynamically adjust the overall loss function by combining these stored samples with new + +![](images/00583251b73d6041ebf98e6423039bfd7646175fd5a57fbed24003e8367447b0.jpg) +(a) FO-Adam + +![](images/f7ec10cedc69d78b54750e7906824d335aedefb90ec022a5781a5bf7f2c94d25.jpg) +(b) ZO-Adam +Figure 3: Trajectory of FO and ZO Optimization during Overcoming Forgetting. The trajectory is taken when using the total loss from both tasks (cyan) and the gradients from each individual task at fixed points during optimization (red and orange). The trends of ZO optimization hold the potential to manage forgetting and learning. + +data based on learning progress. + +$$ +\mathcal {L} _ {\text {t o t a l}} = \frac {1}{N _ {\text {c o n t e x t}}} \mathcal {L} _ {\text {c u r}} + \left(1 - \frac {1}{N _ {\text {c o n t e x t}}}\right) \mathcal {L} _ {\text {r e p l a y}}, \tag {2} +$$ + +where $N_{\text{context}}$ represents the number of contexts encountered so far. In Experience Replay (Rolnick et al., 2019), both components use classification loss based on their respective data distributions, so ZO gradients can be expressed as $\hat{\nabla}\mathcal{L}_{cur}$ and $\hat{\nabla}\mathcal{L}_{repl}$ respectively. However, in the emerging generative replay workflows (Shin et al., 2017), Equation (2) may introduce additional loss for the training of generators. In this case, the generator can be trained using standard backpropagation or in conjunction with ZO training without FO gradients. + +ii) Extension-based methods can be divided into fixed and dynamic architectures. Fixed architectures separate model parameters for specialized context learning, while dynamic architectures expand the model size during adaptation. Both approaches mitigate forgetting from the model's perspective and enable model-agnostic ZO solutions. +iii) Regularization-based methods penalize significant changes to parameters important for old tasks or maintain the output distribution with respect to previous inputs. The template loss function is given by + +$$ +\mathcal {L} _ {\text {t o t a l}} = \mathcal {L} _ {\text {c u r}} + \alpha \mathcal {L} _ {\text {r e g}}, \tag {3} +$$ + +where $\alpha$ is a coefficient hyperparameter. The FO gradients from dual objectives (Lcur for adaptation and Lreg for preservation) drive optimization toward their respective optima, achieving inter-task equilibrium. Notably, ZO gradient estimates, though obtained in a noisy environment, exhibit comparable optimization behavior. + +As shown in Figure 3, we visualize and compare the optimization trajectories of ZO and FO methods under the learning-memory trade-off dynamics in continual learning. The objective is defined over two-dimensional parameters, with axes specified in Appendix A.2. The striking similarity + +Table 1: ZeroFlow Evaluation on CIFAR-100, ImageNet-A, CUB and OmniBenchmark. 
This table compares the average accuracy, final accuracy, and forgetting measures of 2 models across 4 datasets. For a more intuitive view of the trends, see Figure 2. All ZO optimizations use a query budget of $q = 1$. Bold indicates the best accuracy achieved among ZeroFlow.
| Method | Optimizer | Strategy | CIFAR-100 (Avg / Last / Fgt) | CUB (Avg / Last / Fgt) | ImageNet-A (Avg / Last / Fgt) | OmniBenchmark (Avg / Last / Fgt) |
|---|---|---|---|---|---|---|
| EASE | SGD | FO | 91.23 / 85.96 / 7.32 | 89.31 / 83.76 / 9.61 | 61.24 / 51.02 / 10.84 | 74.73 / 67.40 / 15.11 |
| | | ZO | 78.62 / 68.40 / 15.64 | 88.94 / 82.91 / 8.08 | 57.87 / 48.32 / 11.08 | 73.50 / 66.60 / 17.78 |
| | | Sign | 83.21 / 75.88 / 10.58 | 89.81 / 84.61 / 8.10 | 59.15 / 49.31 / 11.77 | 73.81 / 66.75 / 17.21 |
| | | Conserve | 82.22 / 75.88 / 8.93 | 89.21 / 83.42 / 10.31 | 58.61 / 48.58 / 12.41 | 77.07 / 70.73 / 14.87 |
| | Adam | FO | 90.56 / 84.82 / 7.69 | 84.44 / 77.10 / 10.51 | 59.60 / 47.20 / 19.08 | 74.27 / 66.28 / 15.63 |
| | | ZO | 83.36 / 76.09 / 10.16 | 89.49 / 84.14 / 8.67 | 58.90 / 48.72 / 12.35 | 76.15 / 69.69 / 15.87 |
| | | Sign | 83.14 / 76.01 / 10.44 | 89.82 / 84.65 / 8.21 | 58.97 / 48.85 / 12.20 | 77.12 / 71.08 / 14.68 |
| | | Conserve | 82.15 / 75.65 / 9.24 | 89.82 / 84.61 / 8.40 | 59.23 / 48.85 / 12.81 | 77.19 / 70.99 / 14.68 |
| | - | Forward | 82.26 / 76.05 / 8.74 | 89.26 / 83.67 / 9.35 | 57.76 / 48.19 / 11.03 | 77.00 / 70.74 / 14.99 |
| APER | SGD | FO | 82.31 / 76.21 / 7.33 | 90.56 / 85.16 / 5.19 | 59.50 / 49.37 / 9.91 | 78.61 / 72.21 / 7.87 |
| | | ZO | 82.33 / 76.21 / 7.36 | 90.53 / 85.20 / 5.12 | 59.58 / 49.51 / 10.02 | 78.60 / 72.21 / 7.85 |
| | | Sign | 82.32 / 76.23 / 7.32 | 90.42 / 85.28 / 4.96 | 59.65 / 49.77 / 9.89 | 78.60 / 72.26 / 7.78 |
| | | Conserve | 82.31 / 76.21 / 7.33 | 90.62 / 85.28 / 5.05 | 59.68 / 49.70 / 10.18 | 78.61 / 72.21 / 7.87 |
| | Adam | FO | 82.31 / 76.21 / 7.33 | 90.56 / 85.16 / 5.19 | 59.60 / 49.77 / 10.06 | 76.60 / 72.21 / 7.85 |
| | | ZO | 82.12 / 75.45 / 7.47 | 90.33 / 84.31 / 6.01 | 58.89 / 49.24 / 9.32 | 78.44 / 72.10 / 7.87 |
| | | Sign | 82.01 / 75.60 / 7.38 | 89.86 / 84.18 / 5.99 | 57.82 / 48.12 / 9.72 | 78.26 / 72.05 / 7.75 |
| | | Conserve | 82.21 / 75.98 / 7.34 | 89.96 / 84.48 / 5.90 | 57.86 / 47.53 / 10.00 | 78.61 / 72.21 / 7.87 |
| | - | Forward | 82.32 / 76.22 / 7.32 | 89.47 / 83.38 / 6.24 | 58.25 / 47.99 / 9.62 | 77.61 / 71.45 / 7.87 |
+ +between the two trajectories highlights the potential of ZO optimization in effectively balancing learning and forgetting, thereby motivating our further investigation. + +Potential. The intrinsic optimization mechanism of ZO exhibits particular promise in continual learning scenarios. Intuitively, ZO perturbs parameters using random or coordinate-wise directional vectors and observes changes in the evaluation function, effectively optimizing within a noisy environment. This approach enables small parameter modifications to yield significant impacts on target objectives, resulting in distinctive gradient estimations compared to FO optimization. Notably, while ZO methods do not explicitly incorporate sharpness regularization terms, they naturally facilitate the exploration of flat regions in parameter space. The influence of optimizing flat regions with ZO approaches in continual learning can be summarized in two main manifolds: (i) For previous tasks, the noise-induced parameter robustness enhances resilience against perturbations from new task adaptation; (ii) For new tasks, empirical evidence suggests that convergence to flat minima generally leads to lower generalization error. + +Risk. Although ZO demonstrates superior generalization abilities, its practical performance is limited by optimization strategies and the complexity of the optimization setting. Despite significant efforts to reduce convergence error, optimizing models from scratch in high-dimensional space remains challenging due to slow convergence speed (proportional to the parameter dimension $d$ ). For instance, origin CGE-based ZO training for a model with 12k parameters + +takes 70.32 hours in DeepZero (Chen et al., 2023). Such computational demands render from scratch training impractical for high-dimensional CL models, particularly those employing expansion-based architectures. Consequently, we focus our discussion on leveraging ZO optimization to overcome forgetting within a pre-training context. + +# 4. ZeroFlow Benchmark + +This section delves into the empirical performance of ZO optimization in overcoming catastrophic forgetting. Our ZeroFlow benchmark evaluates average performance across incremental stages, final-stage accuracy, forgetting, and efficiency, while accounting for dataset complexity and model diversity. + +# D.1. Benchmark Setups + +Forgetting scenarios, schemes, and models. We conduct evaluations under a standard catastrophic forgetting setting, namely class incremental learning. For this purpose, we investigate two state-of-the-art schemes: EASE and APER. Both models are initialized with ViT-B/16 pretrained on ImageNet-1K (IN1K), and are subsequently fine-tuned on four downstream tasks of varying complexity—ranging from standard benchmarks such as CIFAR-100 and CUB, to more challenging datasets like ImageNet-A and OmniBenchmark, which exhibit a large domain gap from the pretraining distribution (Zhou et al., 2024a;c). 
Following (Zhou et al., 2023a), each dataset is evenly split into 10 + +![](images/aea882ca111bc58063d3c9931a7becce380c240c4fccf2d317319dd0709625d6.jpg) + +![](images/4fe76d41afe093e69a952f857c1b5a54a07747ca0aa666c8bf400c0bc3c52b3d.jpg) + +![](images/5b68a11d3549a1f4da4dac9f6152d83dcf8e5595cf135a4ff3869ba0a6ca4ffe.jpg) + +![](images/6d8ea6c43c8f7c2c11698b4fb47b034301844abc67d6f39df4d2238fe6ef67b6.jpg) + +![](images/7dc7a4a88e5dd2d92ccecb840fb6310c6eea85cccab2390c9e3c37d5cd86f02c.jpg) + +![](images/6f79fb3ac83f60da1b5f02912be6ccaaff63971f3f8ae4946226a6a490c6fd9c.jpg) +(a) FO-Adam +(f) FO-SGD +Figure 4: The Trajectory of Different Optimization during Overcoming Forgetting. $\downarrow$ , $\downarrow$ , and $\star$ denote the minima for the new, old, and both tasks, respectively. The trajectory is taken when using the total loss from both tasks (cyan). + +![](images/59a1c8acc85b22aac6125fe1c7b96564ac1b1dedff876a4a991a4263735705c6.jpg) +(b) ZO-Adam +(g) ZO-SGD + +![](images/e435c62a88be1af4ac72b06b6dc413eee178316842f6c48951e34eb73bfaf64a.jpg) +(c) ZO-Adam $(q = 4)$ +(h) ZO-SGD $(q = 4)$ + +![](images/e5dd1cb9b165846a7b65abe5f19ce7f4b5aecb6decc74ce2bc63947c58e25f3a.jpg) +(d) ZO-Adam-Sign +(i) ZO-SGD-Sign + +![](images/5e586783c899c6f98abdacc5b814d01b6b0dd8837bab56fc9ca153308d7a5289.jpg) +(e) ZO-Adam-Conserve +(j) ZO-SGD-Conserve + +incremental tasks by class. For instance, OmniBenchmark contains 300 classes, with 30 classes introduced at each stage. No memory is permitted for storing past examples. + +Benchmark setup and details. To evaluate the application of ZeroFlow in forgetting scenarios, we include the methods described in Section C.1, specifically ZO (Ghadimi & Lan, 2013), Sign (Liu et al., 2019), and Conserve (Kim et al., 2021; Zhang et al., 2024), in comparison with their FO counterparts using SGD and Adam optimizers (Chen et al., 2019). Additionally, as highlighted in (Zhang et al., 2024), Forward-Grad (Baydin et al., 2022) which relies on forward mode automatic differentiation, potentially becomes a missing but competitive forward pass baseline. In a nutshell, ZeroFlow covers 7 forward pass-based methods: ZO-SGD, ZO-SGD-Sign, ZO-SGD-Conserve, ZO-Adam, ZO-Adam-Sign, ZO-Adam-Conserve, Forward-Grad. Unless otherwise specified, the query budget is fixed to 1 for efficiency. Notably, here we consider generating one set of perturbation vectors for the entire model as one query. In other words, we usually require 2 forward propagations for two-point finite difference gradient estimations. + +Evaluation metrics. Overall, we adopt two categories of evaluation metrics in ZeroFlow: accuracy and efficiency. The accuracy metrics include average accuracy across all tasks, final-task accuracy, and a forgetting score (BWT in Appendix B.5). The efficiency metrics encompass memory usage (GPU), query budget, and runtime. Together, these metrics provide insights into the resource demands of ZO optimization for mitigating forgetting. + +# D.2. Evaluation Results of ZeroFlow + +ZeroFlow evaluation on continual learning. In Table 1, we evaluate the performance of different BP-free and BP- + +based (FO-SGD and FO-Adam) methods in a typical forgetting scenario (continual learning). We use two SOTA models as examples (EASE (Zhou et al., 2024b) and APER (Zhou et al., 2023a)) and investigate SGD and Adam optimizers, 7 forward pass-based methods, and four commonly used datasets. 
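For reference, the accuracy and forgetting scores in Table 1 follow standard class-incremental conventions; a minimal sketch of one common way to compute them from a per-stage accuracy matrix is given below. The array layout and function names are illustrative assumptions, and the exact forgetting score used in the benchmark is the one defined in Appendix B.5.

```python
import numpy as np

# acc[i, j]: accuracy on task j measured after learning stage i (for j <= i),
# collected over T incremental stages.
def average_accuracy(acc):
    # Mean over stages of the accuracy on all tasks seen so far ("Avg").
    T = acc.shape[0]
    return float(np.mean([acc[i, : i + 1].mean() for i in range(T)]))

def last_accuracy(acc):
    # Accuracy averaged over all tasks after the final stage ("Last").
    return float(acc[-1].mean())

def forgetting(acc):
    # One common definition ("Fgt"): average drop from the accuracy measured
    # right after a task was learned to its accuracy after the final stage.
    T = acc.shape[0]
    return float(np.mean([acc[j, j] - acc[-1, j] for j in range(T - 1)]))
```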
Several observations are listed below.

First, the performance of the ZO methods is comparable to, or even surpasses, that of the FO methods across almost all forgetting metrics and datasets. However, as will be shown later, the FO methods require significantly more memory overhead. This suggests that forward passes alone can effectively mitigate forgetting, and the ZO methods offer a simpler, more efficient alternative. In some cases, such as with ZO-Adam and ZO-SGD on OmniBenchmark, ZO methods even outperform FO methods.

Second, Forward-Grad demonstrates competitive performance when compared to other ZO and FO methods. Unlike typical ZO methods, Forward-Grad utilizes a unique forward pass mechanism, making it a promising baseline for future studies. For a more intuitive view of the trends in overcoming forgetting, see Figure 6. These observations motivate further exploration into the effectiveness of ZO methods.

ZeroFlow helps manage memory and runtime. In Table 2, we compare the efficiency of various ZO and FO optimizers in mitigating catastrophic forgetting, focusing on two key aspects: memory cost (in GB) and runtime cost (in seconds). First, naive ZO optimization reduces memory usage by approximately fivefold compared to FO optimization. Moreover, ZO methods reduce runtime per iteration by around $50\%$ relative to FO, significantly improving their practicality for overcoming forgetting. Notably, we regenerate the perturbation vectors for model parameters iteratively by storing random seeds. This degrades the vec

Table 2: Memory Cost (GB) and Runtime Cost (s) of Each Optimizer on 3 Forgetting Scenarios. Runtime is reported per epoch in seconds (s). ZO-SGD w/ query budget $q = 1,4$ and all other optimizers w/ query budget $q = 1$.
| Optimizer | Memory ↓ | Runtime ↓ (CIFAR-100) | Runtime ↓ (CUB) | Runtime ↓ (ImageNet-A) |
|---|---|---|---|---|
| FO-SGD | 12.08 GB | 59.3s | 16.1s | 12.2s |
| ZO-SGD (q=1) | 2.41 GB | 32.4s | 8.3s | 6.8s |
| ZO-SGD (q=4) | 2.41 GB | 111.7s | 28.7s | 18.0s |
| ZO-SGD-Sign | 2.41 GB | 32.4s | 8.3s | 6.8s |
| ZO-SGD-Conserve | 2.41 GB | 70.1s | 15.7s | 12.4s |
| Forward-Grad | 3.94 GB | 45.9s | 11.1s | 9.0s |
+ +![](images/fcfbbe0dd4a4d770e907df05bf4f0883d459154e5e0c51ebced1100d9d18ebd3.jpg) +Figure 5: Performance Comparison under Different Query Numbers. Both optimizers show improved performance as query numbers increase. + +tor granularity from full-model to per-layer level, thereby further reducing the memory required for forward evaluations in ZeroFlow, at the cost of additional runtime for regenerating the vectors. Second, the ZO and Sign variants demonstrate comparable efficiency in both memory and runtime. Although increasing the number of queries can impact runtime efficiency, it does not compromise memory advantages. Third, Conserve also demonstrates efficient memory management, although its runtime is approximately twice as long as that of naive ZO. This may partly explain its stronger performance in some scenarios, as shown in Table 1. Finally, the Forward Gradient method requires more memory than other ZO-based approaches because it involves computing gradients via the Jacobian-vector product (JVP), which necessitates storing all intermediate activations during the forward pass. For models like ViT, this includes large attention maps and other intermediate representations. In contrast, naive ZO methods only require two forward passes on perturbed inputs and avoid storing these intermediate values, resulting in much lower memory usage. + +Trade-off between performance and query number. As shown in Figure 5, we investigate the impact of query numbers on optimization performance, comparing SGD and Adam optimizers in the zeroth-order setting. Both optimizers demonstrate improved performance as query numbers + +![](images/dc725016808a87ba3c6de24e70aee3f2f7962edd2a55d8703f867f4aec982826.jpg) +(a) EASE on last accuracy + +![](images/bc9d4bacc7fc79a603bda4863721638eb9c125cc526b6c95cfdad3b2250d2c90.jpg) +(b) APER on last accuracy +Figure 6: ZeroFlow Evaluation Results for Forgetting. We visualize the evaluation of 2 models in last-task accuracy. + +increase across $\{1,2,4,8,16,32\}$ , suggesting that additional function evaluations enable more accurate gradient estimation. The results suggest that in scenarios where function evaluation costs are manageable, higher query numbers can yield substantially better performance, with Adam being particularly effective at leveraging the additional gradient information for enhanced optimization outcomes. + +# 5. Insights and Discussions + +As shown in Figure 4, we visualized the optimization trajectories of both forward passes and backpropagation methods. Our analysis reveals several key insights: + +Convergence behavior across optimizer families. In Figure 4, both FO and ZO methods demonstrate successful convergence to the minima of new and old knowledge spaces, regardless of whether they use Adam or SGD as their base optimizer. This convergence consistency validates our theoretical foundation. + +Distinct trajectory characteristics of FO and ZO. FO approaches (Figure 4a, 4f) show smoother optimization paths due to their access to exact gradient information. In contrast, ZO methods demonstrate varying degrees of exploration behavior through trajectory jitter. This exploration pattern is particularly pronounced in ZO-Adam variants compared to ZO-SGD variants, indicating that the base optimizer choice significantly influences the exploration-exploitation trade-off during optimization. + +Path characteristics in ZO optimization. 
Comparing base ZO methods with their $q = 4$ counterparts (Figure 4b vs 4c, Figure 4g vs 4h), we observe that increasing the query number leads to smoother trajectories, suggesting that more queries help provide more stable gradient estimates. The Sign variants (Figure 4d, 4i) demonstrate more pronounced oscillations in their trajectories, particularly visible in the ZO-Adam-Sign case. In contrast, the conservative variants (Figure 4e, 4j) maintain relatively stable paths that better balance between the old and new task minima.

Distinct characteristics between optimizer families. Adam-based approaches (Figure 4a-4e) demonstrate more

![](images/f42784ad90c987ecc95217537ce40eb6f000cd257ade84b1c90b5be63a810201.jpg)
Figure 7: Effectiveness of Hybrid ZO in Overcoming Forgetting. In Hybrid ZO, backpropagation benefits from forward passes.

Table 3: Effectiveness of Historical Estimation in Mitigating Forgetting. A proportion of $0\%$ denotes the plain ZO-SGD optimizer. Bold indicates the best performance.
| Metric | 0% | 20% | 40% | 60% | 80% |
|---|---|---|---|---|---|
| Avg | 57.87 | 58.90 | 58.76 | 58.34 | 57.83 |
| Last | 48.32 | 49.04 | 48.84 | 48.42 | 48.10 |
| Fgt | 11.08 | 11.79 | 11.78 | 11.60 | 11.57 |
+ +oscillatory trajectories with frequent direction adjustments, indicating a more dynamic exploration of the loss landscape. In contrast, SGD-based methods (Figure 4f-4j) exhibit smoother and more stable trajectories, suggesting a more gradual progression toward the optimization objective. These distinct optimization patterns could influence how each method balances between preserving old task knowledge and adapting to new tasks. + +# 6. New Enhancement to Mitigate Forgetting + +In ZO optimization, the estimation of the gradients relies on a finite difference of the objective function. We set query budget $q = 1$ in the benchmark for efficiency. However, limited queries cannot capture the accurate ZO directions. When the model learns tasks sequentially, the high variance inherent in ZO gradient estimation poses a critical challenge. Though increasing query numbers can stabilize the gradient estimates, it leads to prohibitive overhead Thus, exploring variance-reduced optimization algorithms is crucial for ZO-based CL. Specifically, we propose 3 enhancements to stabilize the ZO optimization process: + +Enhancement 1: Hybrid ZO to overcome forgetting. While ZO methods does not explicitly minimize sharpness, it stabilizes optimization by approximating gradients and assessing the rate of change in loss function through perturbations. This indirect approach helps reduce the curvature of the loss landscape, steering the optimization away from sharp and unstable regions. This insight motivates us to investigate Hybrid ZO method. Figure 7 illustrates results + +![](images/fbf8c675ce110f49f63e59dc53bf618a74a561c53a15088819832b0295146793.jpg) +(a) FO-SGD + +![](images/ed6a7bdf398f550aac467876c98b3334dd93d036a9120372ea33d773ac1e899d.jpg) +(b) ZO-SGD +Figure 8: Variation in Function Values of Forward Passes. Function values for new tasks is highlighted in red, old tasks is highlighted in green. + +hybrid ZO. We first use FO to coarsely optimize to a local minimum (first 140 or 160 epochs) and then refine the solution by searching for flatter regions around it using ZO (last 30 or 60 epochs). As the first two subfigures in Figure 7, ZO provides only limited gains to FO. This is because FO inherits strong generalization from the pretrained backbone but loses its generalization ability quickly after two incremental stages. In later stages, ZO helps to remedy the vulnerabilities of backbone trained by FO, leading to significant enhancements compared to the FO baseline. + +Enhancement 2: Leverage historical information to overcome forgetting. When learning new tasks, models leverage previously learned parameters while prioritizing the preservation of crucial parameters for old tasks. To mitigate interference from new tasks, we propose reweighting old task gradients with historical gradients, which can stabilize perturbations caused by low query loops in ZO optimization. Figure 8 illustrates the function value trajectories for both old and new tasks. While FO optimization shows smooth convergence toward the global optimum, ZO optimization exhibits a more volatile path. Notably, objectives related to old tasks demonstrate smaller changes in both magnitude and variance. 
This observation motivates us to stabilize the optimization by reducing changes to old gradients through a linear combination with historical gradients: $g_{old} = (1 - \alpha)g_{old} + \alpha g_{historical}$ , where larger $\alpha$ indicates greater reliance on historical information for stability, at the cost of reduced contrast with new task gradients. + +Table 4: Effectiveness of Sparsity-induced Estimation in Overcoming Forgetting. Proportion of $0\%$ denotes the plain ZO-SGD. Bold indicates the best performance. + +
| Ratio | 0% | 10% | 20% | 30% | 40% | 50% | 60% | 70% | 80% | 90% |
|---|---|---|---|---|---|---|---|---|---|---|
| Avg | 57.87 | 59.17 | 59.46 | 59.29 | 59.39 | 59.45 | 59.26 | 59.39 | 59.38 | 59.47 |
| Last | 48.32 | 48.58 | 49.05 | 48.72 | 48.91 | 49.24 | 49.11 | 49.05 | 49.11 | 49.24 |
| Fgt | 11.08 | 12.65 | 12.17 | 12.76 | 12.53 | 12.37 | 12.36 | 12.54 | 12.46 | 12.33 |
+ +Table 5: Ablation Studies on the Effectiveness of Combining Enhancements. + +
| Optimizer | Hybrid | Historical | Sparsity | Avg | Last |
|---|---|---|---|---|---|
| FO-SGD | - | - | - | 61.24 | 51.02 |
| ZO-SGD | - | - | - | 57.87 | 48.32 |
| ZO-SGD | ✓ | | | 61.40 (+3.53) | 51.34 (+3.02) |
| ZO-SGD | | ✓ | | 58.90 (+1.03) | 49.04 (+0.72) |
| ZO-SGD | | | ✓ | 59.47 (+1.60) | 49.24 (+0.92) |
| ZO-SGD | ✓ | ✓ | ✓ | 62.07 (+4.20) | 51.94 (+3.62) |
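As a concrete reference for the enhancements evaluated in Tables 3-5, below is a minimal sketch of how the historical-gradient reweighting (Enhancement 2) and the seed-based sparsification (Enhancement 3) described in this section can be layered on top of a ZO gradient estimate; the function names, the keep-ratio convention, and the surrounding update are illustrative assumptions rather than the benchmark code.

```python
import numpy as np

def reweight_with_history(g_old, g_hist, alpha=0.2):
    # Enhancement 2: blend the current old-task ZO gradient with a running
    # historical gradient, g_old <- (1 - alpha) * g_old + alpha * g_hist.
    return (1.0 - alpha) * g_old + alpha * g_hist

def sparsify(g_hat, keep_ratio=0.5, seed=0):
    # Enhancement 3: a seed-determined random mask keeps a fraction of the
    # estimated gradient's components and zeroes out the rest.
    rng = np.random.default_rng(seed)
    mask = (rng.random(g_hat.shape) < keep_ratio).astype(g_hat.dtype)
    return g_hat * mask

# Illustrative use on a ZO estimate g_hat (as sketched in Section 3):
g_hat = np.array([0.3, -1.2, 0.8, 0.1])
g_hist = np.array([0.2, -1.0, 0.6, 0.0])
g_hat = reweight_with_history(g_hat, g_hist, alpha=0.2)
g_hat = sparsify(g_hat, keep_ratio=0.5, seed=42)
# theta = theta - lr * g_hat  (parameter update as in Algorithm 1)
```

Both operations act only on the estimated gradient vector, so they keep the query budget at $q = 1$.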
+ +In Table 3, we validate the effectiveness of historical estimation in mitigating catastrophic forgetting. Modest proportions of historical information (e.g., $20\%$ , $40\%$ , $60\%$ ) outperform ZO-SGD $(0\%)$ , effectively controlling perturbations while maintaining a low query budget $(q = 1)$ . + +Enhancement 3: Sparsity-induced estimation helps to overcome forgetting. In ZO optimization, the gradients for new tasks are often highly uncertain due to the approximation nature of the gradient estimation. To reduce this variance, we implement random sparsification by creating a seed-based mask and setting gradients outside the mask to zero. By reducing the number of non-zero gradient components, we aim to stabilize the optimization process and mitigate the noise in gradient updates. + +In Table 4, we report the performance of sparsity-induced ZO in overcoming forgetting. The sparsity level is varied in this experiments, ranging from $10\%$ to $90\%$ . We observe that the sparse technique improves the average and last accuracy across all scales, which implies that forgetting is effectively controlled. The reduction in volatility can be attributed to the sparse strategy yielding smoother gradient estimates compared to plain ZO-SGD, effectively bounding variance to a low level and thus mitigating forgetting. Moreover, the robust performance across different sparsity ratios provides strong evidence for the efficacy of variance control in addressing forgetting. + +Complementary Enhancements: The results in Table 5 demonstrate that the proposed enhancements are not mutually exclusive and can be effectively integrated. Specifically, FO training can substantially benefit from subsequent finetuning with hybrid ZO optimization, as illustrated in Figure 7. Notably, the inherent instability of ZO with large step + +fluctuations can sometimes facilitate escaping local minima and encourage broader exploration, which in turn benefits FO convergence. Furthermore, incorporating historical gradients and sparsity perturbations contributes to mitigating forgetting and stabilizing the optimization process. + +# 7. Conclusion + +This paper introduces ZeroFlow, a benchmark study that probes a series of forward pass-based methods for overcoming catastrophic forgetting. This work resorts to an easier way (no need for backpropagation and activation storage) to overcome forgetting. Concretely, our benchmarks include various forward pass-based methods, forgetting scenarios, and evaluation metrics. We also reveal the overlooked optimization principles for overcoming forgetting via forward passes. Based on these insights, we propose two easier and better enhancement to overcome forgetting and extend the application of related methods easily. + +# Impact Statement + +This paper presents work whose goal is to advance the field of Machine Learning. There are many potential societal consequences of our work, none which we feel must be specifically highlighted here. + +# Acknowledgments + +This work was supported in part by the National Natural Science Foundation of China (NSFC) under Grant 62495063. This work was supported in part by the China Postdoctoral Science Foundation under Grant 2024M761677. + +# References + +Aojun, L., Hangjie, Y., Tao, F., and Yanan, S. Rethinking the stability-plasticity trade-off in continual learning from an architectural perspective. ICML, 2025. +Baydin, A. G., Pearlmutter, B. A., Syme, D., Wood, F., and Torr, P. Gradients without backpropagation. arXiv preprint arXiv:2202.08587, 2022. 
+Berahas, A. S., Cao, L., Choromanski, K., and Scheinberg, K. A theoretical and empirical comparison of gradient approximations in derivative-free optimization. Foundations of Computational Mathematics, 22(2):507-560, 2022. +Bergou, E. H., Gorbunov, E., and Richtarik, P. Stochastic three points method for unconstrained smooth minimization. SIAM Journal on Optimization, 30(4):2726-2749, 2020. +Bian, A., Li, W., Yuan, H., Yu, C., Wang, M., Zhao, Z., Lu, A., Ji, P., and Feng, T. Make continual learning stronger via c-flat. NeurIPS, 2024. +Cha, S., Hsu, H., Hwang, T., Calmon, F. P., and Moon, T. Cpr: classifier-projection regularization for continual learning. *ICLR*, 2021. +Chaudhry, A., Ranzato, M., Rohrbach, M., and Elhoseiny, M. Efficient lifelong learning with a-gem. arXiv preprint arXiv:1812.00420, 2018. +Chen, A., Zhang, Y., Jia, J., Diffenderfer, J., Liu, J., Parasyris, K., Zhang, Y., Zhang, Z., Kailkhura, B., and Liu, S. Deepzero: Scaling up zeroth-order optimization for deep model training. arXiv preprint arXiv:2310.02025, 2023. +Chen, X., Liu, S., Xu, K., Li, X., Lin, X., Hong, M., and Cox, D. Zo-adamm: Zeroth-order adaptive momentum method for black-box optimization. NeurIPS, 32, 2019. +Deng, D., Chen, G., Hao, J., Wang, Q., and Heng, P.-A. Flattening sharpness for dynamic gradient projection memory benefits continual learning. NeurIPS, 34, 2021. +Farajtabar, M., Azizan, N., Mott, A., and Li, A. Orthogonal gradient descent for continual learning. In International Conference on Artificial Intelligence and Statistics, pp. 3762-3773. PMLR, 2020. +Feng, T., Ji, K., Bian, A., Liu, C., and Zhang, J. Identifying players in broadcast videos using graph convolutional network. Pattern Recognition, 124:108503, 2022a. +Feng, T., Wang, M., and Yuan, H. Overcoming catastrophic forgetting in incremental object detection via elastic response distillation. In CVPR, 2022b. + +Feng, T., Yuan, H., Wang, M., Huang, Z., Bian, A., and Zhang, J. Progressive learning without forgetting. arXiv preprint arXiv:2211.15215, 2022c. +Flaxman, A. D., Kalai, A. T., and McMahan, H. B. Online convex optimization in the bandit setting: gradient descent without a gradient. arXiv preprint cs/0408007, 2004. +Foret, P., Kleiner, A., Mobahi, H., and Neyshabur, B. Sharpness-aware minimization for efficiently improving generalization. arXiv preprint arXiv:2010.01412, 2020. +Gan, W., Wan, S., and Philip, S. Y. Model-as-a-service (maas): A survey. In 2023 IEEE International Conference on Big Data (BigData), 2023. +Ghadimi, S. and Lan, G. Stochastic first-and zeroth-order methods for nonconvex stochastic programming. SIAM journal on optimization, 2013. +Gu, J., Zhu, H., Feng, C., Jiang, Z., Chen, R., and Pan, D. L2ight: Enabling on-chip learning for optical neural networks via efficient in-situ subspace optimization. Advances in Neural Information Processing Systems, 2021. +Hadsell, R., Rao, D., Rusu, A. A., and Pascanu, R. Embracing change: Continual learning in deep neural networks. Trends in cognitive sciences, 24(12):1028-1040, 2020. +He, H., Huang, G., and Yuan, Y. Asymmetric valleys: Beyond sharp and flat local minima. NeurIPS, 32, 2019. +Hinton, G. The forward-forward algorithm: Some preliminary investigations. arXiv preprint arXiv:2212.13345, 2022. +Jabri, M. and Flower, B. Weight perturbation: An optimal architecture and learning technique for analog vsi feedforward and recurrent multilayer networks. IEEE Transactions on Neural Networks, 1992. +Jeeveswaran, K., Bhat, P., Zonooz, B., and Arani, E. 
Birt: Bio-inspired replay in vision transformers for continual learning. ICML, 2023. +Kim, B., Cai, H., McKenzie, D., and Yin, W. Curvature-aware derivative-free optimization. arXiv preprint arXiv:2109.13391, 2021. +Kong, Y., Liu, L., Chen, H., Kacprzyk, J., and Tao, D. Overcoming catastrophic forgetting in continual learning by exploring eigenvalues of hessian matrix. IEEE Transactions on Neural Networks and Learning Systems, 2023. +Li, W., Feng, T., Yuan, H., Bian, A., Du, G., Liang, S., Gan, J., and Liu, Z. Unigrad-fs: Unified gradient projection with flatter sharpness for continual learning. IEEE Transactions on Industrial Informatics, 2024. + +Lian, X., Zhang, H., Hsieh, C.-J., Huang, Y., and Liu, J. A comprehensive linear speedup analysis for asynchronous stochastic parallel optimization from zeroth-order to first-order. Advances in Neural Information Processing Systems, 29, 2016. +Lillicrap, T. P., Santoro, A., Marris, L., Akerman, C. J., and Hinton, G. Backpropagation and the brain. Nature Reviews Neuroscience, 2020. +Liu, B., Liu, X., Jin, X., Stone, P., and Liu, Q. Conflict-averse gradient descent for multi-task learning. *NeurIPS*, 2021. +Liu, S., Kailkhura, B., Chen, P.-Y., Ting, P., Chang, S., and Amini, L. Zeroth-order stochastic variance reduction for nonconvex optimization. Advances in Neural Information Processing Systems, 31, 2018. +Liu, S., Chen, P.-Y., Chen, X., and Hong, M. signsgd via zeroth-order oracle. In International Conference on Learning Representations, 2019. +Lopez-Paz, D. and Ranzato, M. Gradient episodic memory for continual learning. NeurIPS, 2017. +Lu, A., Feng, T., Yuan, H., Song, X., and Sun, Y. Revisiting neural networks for continual learning: An architectural perspective. *IJCAI*, 2024. +Luo, Y., Yang, Z., Meng, F., Li, Y., Zhou, J., and Zhang, Y. An empirical study of catastrophic forgetting in large language models during continual fine-tuning. arXiv preprint arXiv:2308.08747, 2023. +Malladi, S., Gao, T., Nichani, E., Damian, A., Lee, J. D., Chen, D., and Arora, S. Fine-tuning large language models with just forward passes. NeurIPS, 2023. +Mangrulkar, S., Gugger, S., Debut, L., Belkada, Y., Paul, S., and Bossan, B. Peft: State-of-the-art parameter-efficient fine-tuning methods. https://github.com/huggingface/peft, 2022. +Masana, M., Liu, X., Twardowski, B., Menta, M., Bagdanov, A. D., and Van De Weijer, J. Class incremental learning: survey and performance evaluation on image classification. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2022. +Mehta, S. V., Patil, D., Chandar, S., and Strubell, E. An empirical investigation of the role of pre-training in lifelong learning. J. Mach. Learn. Res., 24:214:1-214:50, 2023. URL https://jmlr.org/papers/v24/22-0496.html. +Miura, T., Shibahara, T., and Yanai, N. Megex: Data-free model extraction attack against gradient-based explainable ai. In Proceedings of the 2nd ACM Workshop on Secure and Trustworthy Deep Learning Systems, 2024. + +Nesterov, Y. and Spokoiny, V. Random gradient-free minimization of convex functions. Foundations of Computational Mathematics, 2017. +Reddi, S. J., Kale, S., and Kumar, S. On the convergence of adam and beyond. arXiv preprint arXiv:1904.09237, 2019. +Ren, M., Kornblith, S., Liao, R., and Hinton, G. Scaling forward gradient with local losses. arXiv preprint arXiv:2210.03310, 2022. +Rolnick, D., Ahuja, A., Schwarz, J., Lillicrap, T., and Wayne, G. Experience replay for continual learning. Advances in neural information processing systems, 32, 2019. 
+Saha, G., Garg, I., and Roy, K. Gradient projection memory for continual learning. In International Conference on Learning Representations, 2020. +Shi, G., Chen, J., Zhang, W., Zhan, L.-M., and Wu, X.-M. Overcoming catastrophic forgetting in incremental few-shot learning by finding flat minima. NeurIPS, 2021. +Shi, H., Xu, Z., Wang, H., Qin, W., Wang, W., Wang, Y., Wang, Z., Ebrahimi, S., and Wang, H. Continual learning of large language models: A comprehensive survey. arXiv preprint arXiv:2404.16789, 2024. +Shin, H., Lee, J. K., Kim, J., and Kim, J. Continual learning with deep generative replay. Advances in neural information processing systems, 30, 2017. +Spall, J. C. Multivariate stochastic approximation using a simultaneous perturbation gradient approximation. IEEE transactions on automatic control, 37(3):332-341, 1992. +Sun, H.-L., Zhou, D.-W., Ye, H.-J., and Zhan, D.-C. Pilot: A pre-trained model-based continual learning toolbox. arXiv preprint arXiv:2309.07117, 2023a. +Sun, M., Wang, Y., Feng, T., Zhang, D., Zhu, Y., and Tang, J. A stronger mixture of low-rank experts for fine-tuning foundation models, 2025. +Sun, T., Shao, Y., Qian, H., Huang, X., and Qiu, X. Black-box tuning for language-model-as-a-service. In International Conference on Machine Learning, 2022. +Sun, Z., Mu, Y., and Hua, G. Regularizing second-order influences for continual learning. In CVPR, 2023b. +Tavanaei, A., Ghodrati, M., Kheradpisheh, S. R., Masquelier, T., and Maida, A. Deep learning in spiking neural networks. Neural networks, 2019. + +Tung, L. T., Van, V. N., Hoang, P. N., and Than, K. Sharpness and gradient aware minimization for memory-based continual learning. In Proceedings of the 12th International Symposium on Information and Communication Technology, SOICT. ACM, 2023. +van de Ven, G. M., Tuytelaars, T., and Tolias, A. S. Three types of incremental learning. Nature Machine Intelligence, pp. 1185-1197, 2022. +Wang, L., Zhang, X., Su, H., and Zhu, J. A comprehensive survey of continual learning: Theory, method and application. arXiv preprint arXiv:2302.00487, 2023. +Wang, L., Zhang, X., Su, H., and Zhu, J. A comprehensive survey of continual learning: Theory, method and application. TPAMI, 2024. +Yang, E., Shen, L., Wang, Z., Liu, S., Guo, G., and Wang, X. Data augmented flatness-aware gradient projection for continual learning. In IEEE/CVF International Conference on Computer Vision, 2023. +Yang, E., Wang, Z., Shen, L., Yin, N., Liu, T., Guo, G., Wang, X., and Tao, D. Continual learning from a stream of apis. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2024. +Yuan, H., Jiang, J., Albanie, S., Feng, T., Huang, Z., Ni, D., and Tang, M. Rlip: Relational language-image pre-training for human-object interaction detection. In NeurIPS, 2022. +Yuan, H., Zhang, S., Wang, X., Wei, Y., Feng, T., Pan, Y., Zhang, Y., Liu, Z., Albanie, S., and Ni, D. Instructvideo: instructing video diffusion models with human feedback. In CVPR, 2024. +Zhang, D., Feng, T., Xue, L., Wang, Y., Dong, Y., and Tang, J. Parameter-efficient fine-tuning for foundation models. arXiv, 2025. +Zhang, Y., Li, P., Hong, J., Li, J., Zhang, Y., Zheng, W., Chen, P.-Y., Lee, J. D., Yin, W., Hong, M., Wang, Z., Liu, S., and Chen, T. Revisiting zeroth-order optimization for memory-efficient LLM fine-tuning: A benchmark. In *Forty-first International Conference on Machine Learning*, 2024. URL https://openreview.net/forum?id=THPjMr2r0S. 
+Zhao, Z., Bai, H., Zhang, J., Zhang, Y., Zhang, K., Xu, S., Chen, D., Timofte, R., and Van Gool, L. Equivariant multi-modality image fusion. In CVPR, 2024a. +Zhao, Z., Deng, L., Bai, H., Cui, Y., Zhang, Z., Zhang, Y., Qin, H., Chen, D., Zhang, J., Wang, P., and Gool, L. V. Image fusion via vision-language model. In ICML, 2024b. + +Zhong, Q., Ding, L., Shen, L., Mi, P., Liu, J., Du, B., and Tao, D. Improving sharpness-aware minimization with fisher mask for better generalization on language models. arXiv preprint arXiv:2210.05497, 2022. +Zhou, D.-W., Cai, Z.-W., Ye, H.-J., Zhan, D.-C., and Liu, Z. Revisiting class-incremental learning with pre-trained models: Generalizability and adaptivity are all you need. arXiv preprint arXiv:2303.07338, 2023a. +Zhou, D.-W., Wang, Q.-W., Qi, Z.-H., Ye, H.-J., Zhan, D.-C., and Liu, Z. Deep class-incremental learning: A survey. arXiv preprint arXiv:2302.03648, 2023b. +Zhou, D.-W., Wang, Q.-W., Ye, H.-J., and Zhan, D.-C. A model or 603 exemplars: Towards memory-efficient class-incremental learning. *ICLR*, 2023c. +Zhou, D.-W., Sun, H.-L., Ning, J., Ye, H.-J., and Zhan, D.-C. Continual learning with pre-trained models: A survey. In *IJCAI*, pp. 8363-8371, 2024a. +Zhou, D.-W., Sun, H.-L., Ye, H.-J., and Zhan, D.-C. Expandable subspace ensemble for pre-trained model-based class-incremental learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2024b. +Zhou, D.-W., Wang, Q.-W., Qi, Z.-H., Ye, H.-J., Zhan, D.-C., and Liu, Z. Class-incremental learning: A survey. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2024c. +Zhu, D., Sun, Z., Li, Z., Shen, T., Yan, K., Ding, S., Kuang, K., and Wu, C. Model tailor: Mitigating catastrophic forgetting in multi-modal large language models. ICML, 2024a. +Zhu, T., Qu, X., Dong, D., Ruan, J., Tong, J., He, C., and Cheng, Y. Llama-moe: Building mixture-of-experts from llama with continual pre-training. arXiv preprint arXiv:2406.16554, 2024b. URL https://arxiv.org/abs/2406.16554. +Zhuang, H., Weng, Z., Wei, H., Xie, R., Toh, K.-A., and Lin, Z. ACIL: Analytic class-incremental learning with absolute memorization and privacy protection. In NeurIPS, 2022a. +Zhuang, H., Weng, Z., He, R., Lin, Z., and Zeng, Z. GKEAL: Gaussian kernel embedded analytic learning for few-shot class incremental task. In CVPR, 2023. +Zhuang, J., Gong, B., Yuan, L., Cui, Y., Adam, H., Dvornek, N., Tatikonda, S., Duncan, J., and Liu, T. Surrogate gap minimization improves sharpness-aware training. arXiv preprint arXiv:2203.08065, 2022b. + +# ZeroFlow: Overcoming Catastrophic Forgetting is Easier than You Think Supplementary Material + +# A. Experimental Details + +In this section, we provide an overview of zeroth-order optimization algorithms and the function settings used for the trajectory analysis. + +# A.1. Concise Overview of Zeroth-Order Estimation + +Zeroth-order optimization aims to minimize/maximize an objective function $f: \mathbb{R}^n \to \mathbb{R}$ without derivative information. The core problem is formulated as $\min_{\theta \in \mathbb{R}^n} L(\theta)$ , where $\theta$ denotes the optimization variable. To enable gradient-based updates, Simultaneous Perturbation Stochastic Approximation (SPSA(Spall, 1992)) is a commonly used technique to approximate gradients by perturbing the input variables. 
Specifically, the gradient $\hat{\nabla} L(\theta)$ at point $\theta$ is estimated as: + +$$ +\hat {\nabla} L (\theta , \xi ; B) = \frac {L (\theta + \epsilon \xi ; B) - L (\theta - \epsilon \xi ; B)}{2 \epsilon} \cdot \xi^ {- 1}, \tag {4} +$$ + +where $\xi \sim \mathcal{N}(\mathbf{0},\mathbf{I})$ is a random perturbation vector, and $\epsilon > 0$ is a small perturbation step size (typically adjusted during optimization). + +ZO-SGD(Ghadimi & Lan, 2013): Using the gradient estimator $\hat{\nabla} L(\theta, \xi; B)$ , zeroth-order algorithms, such as ZO-SGD, follow the iterative update rule: + +$$ +\theta_ {t + 1} = \theta_ {t} - \eta_ {t} \cdot \hat {\nabla} L \left(\theta_ {t}, \xi_ {t}; B\right), \tag {5} +$$ + +where $\eta_{t}$ is the learning rate at step $t$ . ZO-SGD bypasses explicit gradient computation through local function evaluations, making it suitable for high-dimensional, non-convex optimization problems. + +ZO-SGD-Sign(Liu et al., 2019): A variant of ZO-SGD, known as ZO-SGD-Sign, improves upon the original approach by approximating the gradient direction using the sign of the gradient estimate. The update rule becomes: + +$$ +\theta_ {t + 1} = \theta_ {t} - \eta_ {t} \cdot \operatorname {s i g n} (\hat {\nabla} L (\theta_ {t}, \xi_ {t}; B)), \tag {6} +$$ + +where $\mathrm{sign}(\cdot)$ denotes the element-wise sign function. This approach often leads to faster convergence in some problems where the magnitude of the gradient is not as important as its direction. + +ZO-SGD-Conserve(Bergou et al., 2020): ZO-SGD-Conserve is another variant that conservatively selects the update direction by locally comparing three candidate points, rather than directly committing to a single gradient step. The update rule for this method is: + +$$ +\theta_ {t + 1} = \arg \min _ {y \in \mathcal {C} _ {t}} f (y), \quad \mathcal {C} _ {t} = \left\{\theta_ {t}, \theta_ {t} - \eta_ {t} \cdot \hat {\nabla} L \left(\theta_ {t}, \xi_ {t}; B\right), \theta_ {t} + \eta_ {t} \cdot \hat {\nabla} L \left(\theta_ {t}, \xi_ {t}; B\right) \right\}, \tag {7} +$$ + +This method mitigates overly aggressive updates by evaluating possible directions and choosing the one that locally minimizes the objective function. + +ZO-Adam(Zhang et al., 2024): ZO-AdaMM (Chen et al., 2019) is the first attempt to apply the Adam family (specifically AMSGrad(Reddi et al., 2019)) to zeroth-order (ZO) optimization algorithms, providing convergence guarantees for both convex and nonconvex settings. The update rule is given by: + +$$ +\theta_ {t + 1} = \theta_ {t} - \eta_ {t} \cdot \frac {m _ {t}}{\sqrt {V _ {t}} + \epsilon}, \quad V _ {t} = \operatorname {D i a g} \left(\max \left(v _ {t}, v _ {t - 1}\right)\right), \tag {8} +$$ + +$$ +m _ {t} = \beta_ {1} m _ {t - 1} + (1 - \beta_ {1}) \hat {\nabla} L (\theta_ {t}, \xi_ {t}; B), v _ {t} = \beta_ {2} v _ {t - 1} + (1 - \beta_ {2}) (\hat {\nabla} L (\theta_ {t}, \xi_ {t}; B)) ^ {2}, +$$ + +In our implementation, we simply replace SGD with Adam for convenience, referring to this variant as ZO-Adam. Nevertheless, we also provide a reference implementation of the original oracle ZO-AdaMM algorithm. + +Forward Gradient Descent (FGD)(Baydin et al., 2022): FGD replaces backpropagation with forward-mode automatic differentiation to estimate gradient directions using Jacobian-vector products (JVPs). Instead of computing full gradients via reverse-mode automatic differentiation (AD), FGD samples probe vectors to construct unbiased estimators of the gradient direction. 
A typical FGD update step is: + +$$ +\theta_ {t + 1} = \theta_ {t} - \eta_ {t} \cdot \mathrm {J V P} _ {\theta_ {t}} (v _ {t}) = \theta_ {t} - \eta_ {t} \cdot \left. \frac {d f (\theta)}{d \theta} \cdot v _ {t} \right| _ {\theta = \theta_ {t}}, \tag {9} +$$ + +where $v_{t}$ is a random probe vector (e.g., Rademacher or Gaussian), and $\mathrm{JVP}_{\theta_t}(v_t)$ represents the forward-mode gradient approximation in direction $v_{t}$ . FGD enables training when reverse-mode AD is impractical or unavailable, and offers flexibility for hardware or software systems that only support forward execution. We denote Forward as FGD throughout this paper. + +# A.2. Function Settings + +Following the setup in CAGrad (Liu et al., 2021), we visualize the optimization behavior of first-order (FO) and zeroth-order (ZO) methods in mitigating forgetting. Specifically, we consider a two-dimensional parameter space $\theta = (\theta_{1},\theta_{2})\in \mathbb{R}^{2}$ , with the following task-specific loss functions: $L_{1}(\theta) = c_{1}(\theta)f_{1}(\theta) + c_{2}(\theta)g_{1}(\theta)$ for the old task (orange), and $L_{2}(\theta) = c_{1}(\theta)f_{2}(\theta) + c_{2}(\theta)g_{2}(\theta)$ for the new task (red). The parameter point is initialized at $[-8.5, - 5]$ to be closer to old tasks, facilitating better adaptation to them. The contour plot in Figure 3 illustrates the overall objective function defined as $L(\theta) = L_{1}(\theta) + L_{2}(\theta)$ , where the $x-$ and $y$ axes correspond to $\theta_{1}$ and $\theta_{2}$ , respectively. + +$$ +f _ {1} (\theta) = \log \left(\max \left(| 0. 5 (- \theta_ {1} - 7) - \tanh (- \theta_ {2}) |, 5 \times 1 0 ^ {- 6}\right)\right) + 6, +$$ + +$$ +\begin{array}{l} f _ {2} (\theta) = \log \left(\max \left(| 0. 5 (- \theta_ {1} + 3) - \tanh (- \theta_ {2} + 2) |, 5 \times 1 0 ^ {- 6}\right)\right) + 6, \\ g _ {1} (\theta) = \frac {(- \theta_ {1} + 7) ^ {2} + 0 . 1 (\theta_ {2} - 8) ^ {2}}{1 0} - 2 0, \\ g _ {2} (\theta) = \frac {(- \theta_ {1} - 7) ^ {2} + 0 . 1 (\theta_ {2} - 8) ^ {2}}{1 0} - 2 0, \\ \end{array} +$$ + +$$ +c _ {1} (\theta) = \max \left(\tanh (0. 5 \cdot \theta_ {2}), 0\right), \quad c _ {2} (\theta) = \max \left(\tanh (- 0. 5 \cdot \theta_ {2}), 0\right). +$$ + +# B. Additional Results + +# B.1. Comprehensive Analysis of Memory Usage on ZeroFlow + +![](images/e7cbcdbd21999512c03b0c3da327f6a7653fb9f0b97f3aae4b95115850036fe4.jpg) +Figure 9: Comparison of Memory Usage between FO-SGD and ZO-SGD with Different Batch Sizes. $\Delta$ denotes the increase in memory usage of the final task compared to the initial one. + +In this subsection, we provide a detailed comparison of memory usage during incremental learning to demonstrate the storage efficiency of ZeroFlow (ZO-SGD) compared to its counterpart, FO-SGD. Figure 9 illustrates the peak memory usage of MEMO when trained on the CIFAR-100 dataset. The backbone is fixed as a pretrained ViT-B/16-IN1K model, which is subsequently fine-tuned with batch sizes ranging from 64 to 512. The experimental results highlight the following observations: + +First, doubling the training batch size significantly increases the memory consumption of FO-SGD, requiring more GPU resources. For instance, completing the entire incremental training process on FO requires 1, 2, 3, and 6 GPUs, respectively, for batch sizes of 64, 128, 256, and 512, with each GPU equipping with 24GB of memory. In contrast, ZO-SGD training consistently requires only one GPU resource. 
+ +Second, as training progresses, the memory demands for larger batch sizes increase rapidly. For FO, the memory consumption for 512 batches at stage 5 grows by 30.08 GB compared to the initial stage. In contrast, ZO-SGD shows a modest increase of only 3.92 GB, maintaining a low growth rate. As training advances, the memory efficiency of ZO-SGD becomes more pronounced, especially for model-expansion based CL models. + +# B.2. More Observations on Optimization Trajectories during Overcoming Forgetting + +![](images/62bb40b60480a3ca6278f917f8d06a4d48fc6f033d4b06234a362134850528bd.jpg) +(a) FO-Adam + +![](images/f51cfb882022c2fe74c5bfd1be17c660e5ed8420874c676677e323ad114c010d.jpg) +(b) ZO-Adam + +![](images/1f85c385e6949d1e741852b5454eff9d9c1192b8bf5de48f6e5d4f933e76d796.jpg) +(c) ZO-Adam $(q = 4)$ + +![](images/a1cc5c7224b96a36e1d8f34224c3e8c1e9e19338c0e30a9b77f230e2d3402c58.jpg) +(d) ZO-Adam-Sign + +![](images/d0c0a37a244bf5810e64c191dc5ba0ff04ecb13f274e976fd0a858d68875780f.jpg) +(e) ZO-Adam-Conserve + +![](images/02d73ca785ae03203c8c8bd5fa6b79fb400cc0910fe479475419c6f6693ea7fe.jpg) +(f) FO-SGD + +![](images/299ae94e9520a7d04396ab8b634d8da48d18612e480b61ab3b5552b6f3d0889d.jpg) +(g) ZO-SGD + +![](images/43d6f7766c67919e46f28784f7b139e3f8e21c2d07ebc78a8fbc5c92f3d60971.jpg) +(h) ZO-SGD $(q = 4)$ + +![](images/e07f5b08cb74c5647687e0bea744b551e1d20c94022f72b926d459c120f27253.jpg) +(i) ZO-SGD-Sign + +![](images/124bacd8ba7c533dfd2913033ed3dbc2a235eb5112482e8b6b6406e938e5fd57.jpg) +(j) ZO-SGD-Conserve + +![](images/9023b8f6ff54f80a3c99e7307d15e0a5c01098e3fd2be935e04cdc6f96b1da50.jpg) +(k) FO-Adam + +![](images/2725532c3a0764ed13469ff20f4abc69c93d864bfc0ebb48b1ba6f272e764d0e.jpg) +(1) ZO-Adam + +![](images/580b241999224501942925fe71b5161459f66222026dc3233d54a03b3bc9abdc.jpg) +(m) ZO-Adam $(q = 4)$ + +![](images/5779223a504b787ba070238de74be0f12122a82ce025a6811794b486675b0b0d.jpg) +(n) ZO-Adam-Sign + +![](images/b48286929a3ca2025819f96ac9bd5ecc7935785eb3b7931c3d874911f8d62e36.jpg) +(o) ZO-Adam-Conserve + +![](images/132a5c2dadaa2aa28616967f79fab9b28d803e1355dc1514a4712d4041ba1553.jpg) +(p) FO-SGD + +![](images/7deb02213f1c6c46bef9f9aeec88f91496f6f421e7d4f5ce85f566f40b0c9c99.jpg) +(q) ZO-SGD + +![](images/3f23de0c2ccde05a9165fe0649496ebbfef5518c0c24291d6c1e91c65cdef822.jpg) +(r) ZO-SGD $(q = 4)$ + +![](images/d6cea367b91923239d5b13b5e7f221c2c8c6d69ab093155fc89b112055e1b22d.jpg) +(s) ZO-SGD-Sign +Figure 10: The Trajectory of Different Optimization during Overcoming Forgetting. The first and last two rows are trained for 100k steps with learning rates of 0.001 and 0.01, respectively. Red denotes the minimum of new task, orange denotes the minimum of old task. The cyan trajectory taken when using the total loss from both tasks. + +![](images/ab2d8898f6b4e205efe64ac78597329010333a962d12d6ffbf0c100cabbd4768.jpg) +(t) ZO-SGD-Conserve + +In this subsection, we present a different scenario where the model is initialized at a local minimum $\theta_{1},\theta_{2} = \{-4.0,5.0\}$ surrounded by intricate valleys, but training with different learning rate as shown in Figure 10. For a learning rate of + +0.001, the first-row subfigures demonstrate that Adam using both FO and ZeroFlow stagnate in the valley. Even with bias correction, the Adam optimizer still fails to escape the local region without sufficient momentum. However, ZO-Adam-Sign seems to successfully optimize towards the region around the global minimum. 
Unlike ZO-Adam, ZO-Adam-Sign applies the gradient using a sign function, which outputs either +1 or -1 depending on the gradient direction. This discrete update method, which lacks continuous gradient information, causes ZO-Adam-Sign to take larger, step-like jumps. Particularly in the early stages, where gradient information is sparse or noisy, this leads to more fluctuations and introduces greater randomness in the optimization process, helping it to cross over the valleys. The second-row subfigures use SGD as the base optimizer. We observe that, except for ZO-SGD-Sign, both ZeroFlow and FO-SGD converge effectively. This can be attributed to SGD's simple update rule based on function values. Notably, FO-SGD escapes the valley by leaping to a higher and flatter region, while ZeroFlow demonstrates the ability to traverse beneath valleys. With a higher learning rate of 0.01, FO-Adam, ZO-Adam with four queries, and ZO-Adam-Sign escape the local region more easily. However, ZO-Adam still stagnates along the valley, demonstrating the stabilizing effect of multiple query loops. Similarly, ZO-Adam-Conserve suffers from the risk of an overly conservative strategy. ZO-SGD also fails to converge to the optimum due to gradient explosion caused by the large learning rate. In contrast, ZeroFlow shows minimal degradation despite its inherent randomness. + +As a result, the behavior of ZeroFlow—sometimes escaping the valley but failing to converge to the optimum, and sometimes getting trapped with low query counts but not with higher ones—highlights the trade-off between randomness and stability during updates. With larger search loops, lower learning rates, and more stable update steps, the model becomes increasingly prone to getting stuck in local minima, especially in continual learning scenarios where balancing old and new tasks introduces additional complexity. + +# B.3. Extra Evaluation on Memory Replay Methods + +We further evaluate the performance of ZeroFlow when applied to a representative replay-based method (MEMO (Zhou et al., 2023c), replay buffer $= 2000$ ), to demonstrate its broader applicability. As shown below, ZeroFlow consistently remains stable in mitigating forgetting. Notably, although the average accuracies exhibit slight gaps compared to FO methods, the accuracies at the final stage progressively approach or even surpass those of the FO baselines on the CIFAR-100 dataset. + +Table 6: Accuracy Results on MEMO. + +
| Method | Optimizer | Strategy | CIFAR-100 Avg | CIFAR-100 Last | ImageNet-A Avg | ImageNet-A Last |
| --- | --- | --- | --- | --- | --- | --- |
| MEMO | SGD | FO | 87.43 | 79.66 | 53.15 | 38.97 |
| | | ZO | 85.92 | 79.00 | 46.87 | 25.81 |
| | | Sign | 85.72 | 79.10 | 53.31 | 38.18 |
| | | Conserve | 85.86 | 79.20 | 47.20 | 28.51 |
| | Adam | FO | 86.45 | 76.17 | 54.06 | 41.54 |
| | | ZO | 85.86 | 78.59 | 52.70 | 39.01 |
| | | Sign | 86.16 | 76.38 | 53.10 | 39.82 |
| | | Conserve | 85.89 | 77.71 | 53.20 | 39.57 |
| | - | Forward | 84.63 | 76.32 | 53.59 | 40.64 |
+

# B.4. Memory and Time Efficiency on Larger Transformers

To assess the scalability of ZeroFlow, we evaluated its efficiency on two larger vision transformers, ViT-L/16 and ViT-H/14. As shown below, ZeroFlow consistently offers substantial memory savings across all model sizes. Notably, even when using ZO-SGD-Sign, the runtime remains faster than that of standard FO optimization.

# B.5. Longer Task Sequence

To further assess robustness, we evaluate performance on an extended task sequence consisting of 20 tasks. As shown below, ZeroFlow continues to deliver comparable performance. Additionally, following (Wang et al., 2024), we

Table 7: Evaluation on larger transformers.
| Optimizer | ViT-B/16 Memory↓ | ViT-B/16 Runtime↓ | ViT-L/16 Memory↓ | ViT-L/16 Runtime↓ | ViT-H/14 Memory↓ | ViT-H/14 Runtime↓ |
| --- | --- | --- | --- | --- | --- | --- |
| FO-SGD | 12.08GB | 59.3s | 33.27GB | 65.0s | 78.09GB | 190.1s |
| ZO-SGD (q=1) | 2.41GB | 32.4s | 3.77GB | 47.0s | 6.45GB | 118.7s |
| ZO-SGD (q=4) | 2.41GB | 111.7s | 3.77GB | 178.3s | 6.45GB | 442.6s |
| ZO-SGD-Sign | 2.41GB | 32.4s | 3.77GB | 48.7s | 6.45GB | 119.3s |
| ZO-SGD-Conserve | 2.41GB | 70.1s | 3.77GB | 108.9s | 6.45GB | 222.3s |
| Forward | 3.94GB | 45.9s | 5.82GB | 142.0s | 9.85GB | 372.5s |
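The memory gap in Table 7 follows from how the two optimizers compute updates: a first-order step must retain the activation graph for backpropagation, whereas a zeroth-order step only needs forward evaluations. The sketch below contrasts the two; it is a minimal illustration rather than the paper's implementation, and `model`, `loss_fn`, and `batch` are placeholder names.

```python
import torch

def fo_sgd_step(model, loss_fn, batch, lr=1e-3):
    """First-order SGD: the backward pass stores activations for every layer."""
    loss = loss_fn(model(batch["x"]), batch["y"])
    loss.backward()
    with torch.no_grad():
        for p in model.parameters():
            if p.grad is not None:
                p -= lr * p.grad
                p.grad = None

def zo_sgd_step(model, loss_fn, batch, lr=1e-3, mu=1e-3, q=1, sign_only=False):
    """Zeroth-order SGD: q two-point probes, forward passes only (no activation graph)."""
    params = [p for p in model.parameters() if p.requires_grad]
    grad_est = [torch.zeros_like(p) for p in params]
    with torch.no_grad():
        for _ in range(q):
            u = [torch.randn_like(p) for p in params]   # random probe direction
            for p, d in zip(params, u):                 # theta + mu * u
                p.add_(d, alpha=mu)
            loss_plus = loss_fn(model(batch["x"]), batch["y"])
            for p, d in zip(params, u):                 # theta - mu * u
                p.add_(d, alpha=-2 * mu)
            loss_minus = loss_fn(model(batch["x"]), batch["y"])
            for p, d in zip(params, u):                 # restore theta
                p.add_(d, alpha=mu)
            coeff = float(loss_plus - loss_minus) / (2 * mu * q)
            for g, d in zip(grad_est, u):               # accumulate directional estimate
                g.add_(d, alpha=coeff)
        for p, g in zip(params, grad_est):
            p.sub_(g.sign() if sign_only else g, alpha=lr)  # the "-Sign" variant uses sign(g)
```

Because no computation graph is retained, the peak memory of the zeroth-order step is roughly the parameters plus a single forward pass, which matches the nearly flat Memory columns for the ZO rows in Table 7; the Forward row replaces the finite-difference probe with a forward-mode JVP (Eq. 9), which is consistent with its memory falling between the ZO and FO columns.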
+ +Table 8: Additional Experimental Results of EASE on 20 Sequential Tasks. + +
| Method | Optimizer | Strategy | Avg | Last | FWT | BWT |
| --- | --- | --- | --- | --- | --- | --- |
| EASE | SGD | FO | 87.32 | 80.20 | -6.89 | -6.79 |
| | | ZO | 82.65 | 75.98 | -8.33 | -7.71 |
| | | Sign | 83.47 | 76.13 | -8.01 | -7.22 |
| | | Conserve | 82.20 | 75.94 | -8.64 | -7.93 |
| | Adam | FO | 86.67 | 78.19 | -7.17 | -6.80 |
| | | ZO | 84.07 | 76.89 | -7.92 | -7.19 |
| | | Sign | 84.16 | 76.90 | -7.95 | -7.20 |
| | | Conserve | 83.82 | 76.76 | -8.04 | -7.07 |
| | - | Forward | 82.84 | 76.32 | -8.25 | -10.84 |
+ +adopt the FWT and BWT metrics to assess the overall performance of ZeroFlow. FWT (Forward Transfer) quantifies the average influence of prior knowledge on the learning of new tasks, while BWT (Backward Transfer) measures the average influence of learning new tasks on the performance of previously learned $K - 1$ tasks. + +$$ +\mathrm {F W T} = \frac {1}{K - 1} \sum_ {j = 2} ^ {K} \left(a _ {j, j} - \tilde {a} _ {j}\right), \quad \mathrm {B W T} = \frac {1}{K - 1} \sum_ {j = 1} ^ {K - 1} \left(a _ {K, j} - a _ {j, j}\right) \tag {10} +$$ + +Here, $a_{k,j}$ denotes the accuracy on task $j$ after training on the $k$ -th dataset, and $\tilde{a}_j$ represents the accuracy of a random initialized model trained only on dataset $\mathbb{D}_j$ . \ No newline at end of file diff --git a/ICML/2025/ZeroFlow_ Overcoming Catastrophic Forgetting is Easier than You Think/images.zip b/ICML/2025/ZeroFlow_ Overcoming Catastrophic Forgetting is Easier than You Think/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..7a349abecf02d7625a923ff1fb2a10d2b153e1a6 --- /dev/null +++ b/ICML/2025/ZeroFlow_ Overcoming Catastrophic Forgetting is Easier than You Think/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a09ef8ec5e61327dfca90a9e11b3d572e3e48e6e0852b2788783022e4b76b50a +size 1162003 diff --git a/ICML/2025/ZeroFlow_ Overcoming Catastrophic Forgetting is Easier than You Think/layout.json b/ICML/2025/ZeroFlow_ Overcoming Catastrophic Forgetting is Easier than You Think/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..eda8397f6f7023d43f1c8fe0502c1d6f0484f178 --- /dev/null +++ b/ICML/2025/ZeroFlow_ Overcoming Catastrophic Forgetting is Easier than You Think/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d67540535a32989274b18e7a0c32cc1343cd58412e84e8e40a4c9a8572ab511b +size 653993 diff --git a/ICML/2025/ZipAR_ Parallel Autoregressive Image Generation through Spatial Locality/dc43e24f-9f75-429e-86a4-434685053d80_content_list.json b/ICML/2025/ZipAR_ Parallel Autoregressive Image Generation through Spatial Locality/dc43e24f-9f75-429e-86a4-434685053d80_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..28ac1c7c15ee95291ae4b10bbb817fc9d5eb59bc --- /dev/null +++ b/ICML/2025/ZipAR_ Parallel Autoregressive Image Generation through Spatial Locality/dc43e24f-9f75-429e-86a4-434685053d80_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:983c14ad17bf956a619be261d3f3867b5a69358cce6f69d7d21322ae31436ec0 +size 68273 diff --git a/ICML/2025/ZipAR_ Parallel Autoregressive Image Generation through Spatial Locality/dc43e24f-9f75-429e-86a4-434685053d80_model.json b/ICML/2025/ZipAR_ Parallel Autoregressive Image Generation through Spatial Locality/dc43e24f-9f75-429e-86a4-434685053d80_model.json new file mode 100644 index 0000000000000000000000000000000000000000..7b8355a85fcaadc55c8166f380283d430b3e3259 --- /dev/null +++ b/ICML/2025/ZipAR_ Parallel Autoregressive Image Generation through Spatial Locality/dc43e24f-9f75-429e-86a4-434685053d80_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4844415273f6eb96672c66bce4a7d81149b8cd21dd62f9ad7ed72ca6bc1f67db +size 84573 diff --git a/ICML/2025/ZipAR_ Parallel Autoregressive Image Generation through Spatial Locality/dc43e24f-9f75-429e-86a4-434685053d80_origin.pdf b/ICML/2025/ZipAR_ Parallel Autoregressive Image Generation through Spatial 
Locality/dc43e24f-9f75-429e-86a4-434685053d80_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..631d6d09903e6180908bda9b6e5373096b77e22d --- /dev/null +++ b/ICML/2025/ZipAR_ Parallel Autoregressive Image Generation through Spatial Locality/dc43e24f-9f75-429e-86a4-434685053d80_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e2aeb808cc0a2e9394d7aaa6cc842574a20c2763ed8001f7cc9130b4f342b0dc +size 4846766 diff --git a/ICML/2025/ZipAR_ Parallel Autoregressive Image Generation through Spatial Locality/full.md b/ICML/2025/ZipAR_ Parallel Autoregressive Image Generation through Spatial Locality/full.md new file mode 100644 index 0000000000000000000000000000000000000000..3eb8a8b8637feb56111dee63bb39c20dc4528a15 --- /dev/null +++ b/ICML/2025/ZipAR_ Parallel Autoregressive Image Generation through Spatial Locality/full.md @@ -0,0 +1,282 @@ +# ZipAR: Parallel Autoregressive Image Generation through Spatial Locality + +Yefei He $^{1,2}$ Feng Chen $^{3}$ Yuanyu He $^{1}$ Shaoxuan He $^{1}$ Hong Zhou $^{1}$ Kaipeng Zhang $^{2}$ Bohan Zhuang + +Prompt: A bustling downtown street in Tokyo at night, with neon signs, sidewalks, and tall skyscrapers + +![](images/d21146e009c6059a53df706311bb70fb1ea7af88d0a2f3ef32e946875cceec40.jpg) + +Prompt: A mountain lake at sunrise, with mist rising off, and snow-capped peaks in the background + +![](images/13619db322cf5a860c1c036d6d4b661a80230f29881b7567c29473ed8127baf7.jpg) +Figure 1: Up to $91\%$ forward step reduction with ZipAR. Samples are generated by Emu3-Gen model with next-token prediction paradigm (the first column) and ZipAR (the right three columns). + +Steps: 8190 + +Steps: 2849 (-65%) + +Steps: 1425 (-82%) + +Steps: 713 $(-91\%)$ + +# Abstract + +In this paper, we propose ZipAR, a training-free, plug-and-play parallel decoding framework for accelerating autoregressive (AR) visual generation. The motivation stems from the observation that images exhibit local structures, and spatially distant regions tend to have minimal interdependence. Given a partially decoded set of visual tokens, in addition to the original next-token prediction scheme in the row dimension, the tokens + +1Zhejiang University, China 2Shanghai AI Laboratory, China 3The University of Adelaide, Australia. Correspondence to: Hong Zhou , Kaipeng Zhang . + +Proceedings of the $42^{nd}$ International Conference on Machine Learning, Vancouver, Canada. PMLR 267, 2025. Copyright 2025 by the author(s). + +corresponding to spatially adjacent regions in the column dimension can be decoded in parallel. To ensure alignment with the contextual requirements of each token, we employ an adaptive local window assignment scheme with rejection sampling analogous to speculative decoding. By decoding multiple tokens in a single forward pass, the number of forward passes required to generate an image is significantly reduced, resulting in a substantial improvement in generation efficiency. Experiments demonstrate that ZipAR can reduce the number of model forward passes by up to $91\%$ on the Emu3-Gen model without requiring any additional retraining. 
+ +![](images/1674fdbc0dfa27b72f1868716aa41e3a348fe2163fd42564468439040c8e10b0.jpg) +(a) Raster order AR Modeling + +![](images/5828699fa6cff6f3fbd5b770053d50620ffd94deb411f87a35f54f15d279a4ed.jpg) +(b) Medusa/Jacobi + +![](images/274f7e64de465008205baf2e8cdce98897cab228f5e15d475901bbd16ba493ab.jpg) +(c) MAR +Figure 2: (a) An overview of the training and decoding pipeline for autoregressive (AR) visual generation models. For models trained with a next-token prediction objective, each forward pass generates a single visual token. (b) Medusa (Cai et al., 2024) and Jacobi (Santilli et al., 2023) decoding predict multiple adjacent tokens in sequence order. (c) MAR (Li et al., 2024) predicts multiple tokens in a random order. (d) The proposed ZipAR predicts multiple spatially adjacent tokens. + +![](images/474df4e245753aacd2f3bf926effb2f153f93a66f0d13c01fa1ef5fe6f5f7329.jpg) +(d)ZipAR + +# 1. Introduction + +Recent advancements in large language models (LLMs) with the "next-token prediction" paradigm (Achiam et al., 2023; Vavekanand & Sam, 2024; Team et al., 2023) have demonstrated remarkable capabilities in addressing text-related tasks. Building on these successes, many studies (Liu et al., 2024a; Wang et al., 2024b; Team, 2024; Ge et al., 2024; Wu et al., 2024a) have extended this paradigm to the generation of visual content, leading to the development of autoregressive (AR) visual generation models. These models not only produce high-fidelity images and videos that rival or even exceed the performance of state-of-the-art diffusion models but also facilitate unified multimodal understanding and generation (Wang et al., 2024a; Chen et al., 2025; Wu et al., 2024a;b). However, their slow generation speed remains a significant barrier to widespread adoption. To generate high-resolution images or videos, these models must sequentially produce thousands of visual tokens, requiring numerous forward passes and resulting in high latency. + +To reduce the number of forward passes required for generating lengthy responses, several studies (Cai et al., 2024; Santilli et al., 2023; Chen et al., 2023) have proposed the "next-set prediction" paradigm for LLMs, as depicted in Figure 2(b). These approaches involves introducing multiple decoding heads (Cai et al., 2024) or small draft models (Chen et al., 2023), which generate several candidate tokens that are later evaluated by the original model. However, these methods incur additional costs, as they require extra draft models or the training of new decoding heads. Another approaches use the jacobi decoding methods (Santilli et al., 2023; Fu et al., 2024; Teng et al., 2024), iteratively updates sequences of tokens until convergence. However, in practice, the acceleration achieved by these methods is marginal, as LLMs often fail to generate correct tokens when errors exist in preceding ones. Furthermore, none of these approaches exploit the unique characteristics of visual content, and a parallel decoding framework specifically tailored for AR visual generation has yet to be developed. + +In this paper, we introduce ZipAR, a parallel decoding framework designed to accelerate AR visual generation. 
+ +![](images/101f0ed47f8af83757764e41c98e6d492caad3500ca9fe6721b8aacb98ee8a71.jpg) +(a) Layer 10, Lumina-mGPT + +![](images/c2648d9e5886d82db53641df9d24125c6cae4ec5a56130d9798635d45f492e59.jpg) +(b) Layer 10, LlamaGen +Figure 3: The attention scores of visual tokens in the Lumina-mGPT-7B (Liu et al., 2024a) and LlamaGen-XL (Sun et al., 2024) models.Slash lines indicate that significant attention scores are allocated to tokens at fixed intervals, corresponding to tokens in the same column of previous rows. The full attention scores are presented by storing the attention scores of each visual token during decoding and concatenating them. + +![](images/cfab374ad2d2b6230c458d2fc83e29b11d892df1422334289c80927d4e6693a0.jpg) +(c) Layer 20, LlamaGen + +As depicted in Figure 2(a), common AR visual generation models produce visual tokens in a raster order, where the first token in a row cannot be generated until the last token in the preceding row is decoded despite their spatial separation. However, visual content inherently exhibits strong locality, which is a widely utilized inductive bias for visual tasks (Liu et al., 2021; Zhang et al., 2022; LeCun et al., 1989; Krizhevsky et al., 2012; Zeiler & Fergus, 2014). Specifically, there are significant spatial correlations between spatially adjacent tokens (e.g., token 5 and token 1 in Figure 2(a)) compared to tokens that are adjacent only in the generation order (e.g., token 5 and token 4), which makes the raster-order sequential dependency suboptimal. Empirical evidence, as shown in Figure 3, further supports this observation, with significant attention allocated to tokens in the same column of the previous row. This motivates us to propose decoding tokens from the next row without waiting for the full decoding of the current row, enabling the parallel decoding of multiple tokens in a single forward pass. Specifically, a predefined window size determines whether two tokens are spatially adjacent. Tokens outside this window in adjacent rows are considered irrelevant. Consequently, once the number of generated tokens in a row exceeds the window size, decoding of the next row begins in parallel with the current row. With an appropriately chosen window size, multiple rows can be decoded simultaneously. Unlike Medusa (Cai et al., 2024), which employs auxiliary heads, all tokens generated in parallel by ZipAR are produced using the original model head. Moreover, to address the limitation that manually tuned window size may not optimally adapt to varying attention distributions across tokens, we introduce an adaptive window size assignment scheme. This scheme dynamically adjusts the local window size during generation, ensuring that each token is generated with a window size tailored + +to its contextual requirements. As a result, ZipAR can be seamlessly implemented in a training-free, plug-and-play manner for autoregressive visual generation models, without introducing additional overhead. Experiments across multiple autoregressive visual generation models demonstrate the effectiveness and robustness of ZipAR, achieving forward steps reductions of $91\%$ , $75\%$ , and $81\%$ on Emu3-Gen, Lumina-mGPT-7B, and LlamaGen-XL models, respectively, with minimal degradation in image quality. + +In summary, our contributions are as follows: + +- We propose a spatially-aware parallel decoding strategy that enables inter-row token generation by leveraging the inherent spatial locality of visual content. 
Once the number of generated tokens in a row exceeds a window size, decoding of the next row begins in parallel. +- We propose an adaptive window size assignment scheme that dynamically adjusts the local window size for each token during generation, optimizing decoding efficiency while ensuring the contextual information essential for producing high-quality tokens. +- By integrating these techniques, we present ZipAR, a training-free, plug-and-play framework that achieves significant acceleration in autoregressive visual generation. Extensive experiments demonstrate its effectiveness and robustness across multiple AR visual generation models. + +# 2. Related Work + +# 2.1. Autoregressive Visual Generation + +The success of Transformer models in text-based tasks has inspired studies (Van Den Oord et al., 2017; Esser et al., + +2021; Yu et al., 2023) to apply autoregressive modeling to visual content generation. These methods can be classified into two main categories: GPT-style approaches that utilize the next-token prediction paradigm (Esser et al., 2021; Wang et al., 2024b; Liu et al., 2024a; Sun et al., 2024) and BERT-style approaches that employ masked prediction models (Chang et al., 2022; 2023; Li et al., 2024; Yu et al., 2023). More recently, VAR (Tian et al., 2024) modified the traditional next-token prediction paradigm to next-scale prediction, resulting in faster sampling speeds. Models trained using next-token prediction can leverage the infrastructure and training techniques of large language models (LLMs) and pave the way towards unified multi-modal understanding and generation. However, they are generally less efficient during sampling compared to models that predict multiple tokens in a single forward pass. In this paper, we focus on accelerating visual generation models trained with the next-token prediction objective, hereafter referred to as autoregressive visual generation models unless otherwise specified. + +# 2.2. Efficient Decoding of LLMs. + +Efforts to reduce the number of forward passes required for LLMs to generate lengthy responses can be broadly categorized into two main approaches. The first approach involves sampling multiple candidate tokens before verifying them with the base LLM. Speculative decoding (Chen et al., 2023; Liu et al., 2024b; Spector & Re, 2023; Gui et al., 2024) utilizes a small draft LLM to generate candidate tokens, which are then verified in parallel by the base LLM. While this approach can potentially generate multiple tokens in a single evaluation, deploying multiple models introduces significant memory overhead and engineering challenges. Medusa (Cai et al., 2024) addresses this by employing multiple decoding heads for the base LLM, enabling self-speculation. However, due to the large vocabulary size of LLMs, the parameters in each decoding head can be substantial. The second approach, Jacobi decoding (Santilli et al., 2023; Teng et al., 2024), involves randomly guessing the next n tokens in a sequence, which are iteratively updated by the LLMs. Over time, the n-token sequence converges to the same output as that generated by the next-token prediction paradigm. However, in practice, vanilla Jacobi decoding offers only marginal speedup over autoregressive decoding. This limited improvement is largely due to the causal attention mechanism, which rarely produces a correct token when preceding tokens are incorrect. 
Lookahead (Fu et al., 2024) decoding enhances efficiency by leveraging n-grams generated from previous Jacobi iterations, which are verified in parallel during the decoding process. CLLMs (Kou et al., 2024) further improves the efficiency of Jacobi decoding by fine-tuning the model with a consistency loss, requiring it to map arbitrary + +points on the Jacobi trajectory to a fixed point. However, none of these approaches are designed for autoregressive visual generation or incorporate visual inductive biases. In contrast, the proposed ZipAR takes advantage of the spatial locality inherent in visual content, offering significant acceleration without the need for retraining. Moreover, ZipAR is orthogonal to the aforementioned methods, and can be combined with them to achieve even greater acceleration. + +# 3. Method + +# 3.1. Preliminaries + +Autoregressive (AR) visual generation models with the next-token prediction paradigm have shown exceptional versatility across various vision-language tasks, including generating high-quality images and videos. As shown in Figure 2(a), pre-trained VQ-VAE models (Van Den Oord et al., 2017; Esser et al., 2021) are commonly employed to convert images or videos into visual tokens. The process begins with a visual encoder that extracts feature maps at a reduced spatial resolution. These feature maps are then subjected to vector quantization to produce discrete latent representations, known as visual tokens. These tokens are arranged in a one-dimensional sequence to serve as input for AR models. Although various methods exist to flatten these tokens, the row-major order (raster order) is empirically validated to offer the best performance (Esser et al., 2021), making it the prevalent method for visual generation. During the image generation phase, AR models generate visual tokens sequentially in this raster order. Finally, the complete sequence of visual tokens is rearranged into a two-dimensional structure and processed through a visual decoder to reconstruct the images. + +# 3.2. Inference with ZipAR + +As analyzed in Section 3.1, AR visual generation models with a raster order generate visual tokens row by row, completing each row sequentially from left to right before proceeding to the next. However, images inherently exhibit strong spatial locality. Intuitively, in a high-resolution image, the starting pixel of a row is more closely related to the starting pixel of the preceding row than to the ending pixel of the preceding row due to their spatial proximity. Empirical evidence, as shown in Figure 3, also indicates that significant attention scores are allocated to tokens within the same column of the previous row. Building on these observations, we propose ZipAR, a simple yet effective parallel decoding framework for autoregressive visual generation models. Unlike conventional parallel decoding methods that predict multiple consecutive tokens in a single forward pass, our approach decodes tokens from different rows in parallel. The key idea is that it is unnecessary to wait for an entire row to be generated before initiating the decoding of the + +![](images/ce482180b194bdc0386bc0cd696dfaaf515655bcdffce39523fb7d9286bb70c3.jpg) +Figure 4: A toy example of the ZipAR framework. The window size is set to 2 in this toy example. + +next row, as spatially distant tokens contribute minimally to attention scores. + +To formalize this, we define a local window size $s$ . 
Given the tokens $x_{i,j}$ located in row $i$ and column $j$ , we assume that tokens beyond $x_{i-1,j+s}$ in the previous row have a negligible impact on the generation of $x_{i,j}$ based on the spatial locality of visual tokens. Consequently, the criterion for initiating the generation of token $x_{i,j}$ can be formulated as: + +$$ +C (i, j) = \left\{ \begin{array}{l l} 1, & \text {i f} \left\{x _ {i - 1, k} \mid j \leq k < j + s \right\} \subseteq \mathbb {D} \\ 0, & \text {o t h e r w i s e} \end{array} \right. \tag {1} +$$ + +Here, $\mathbb{D}$ denotes the set of decoded tokens, and $C(i,j) = 1$ indicates that token $x_{i,j}$ is ready to be generated. Once the first token in a row is generated, subsequent tokens in the row can be generated sequentially, along with the unfinished portion of the preceding row, following a next-token prediction paradigm. An illustration of the ZipAR framework with a window size of 2 is shown in Figure 4. + +However, to initiate the decoding of the first token $x_{i,0}$ in row $i$ , the last token of the row $i - 1$ is required as input to the autoregressive model, despite it has not yet been generated in the ZipAR framework. To address this, we propose several solutions tailored to different types of AR visual generation models. Some methods (Liu et al., 2024a; Wang et al., 2024b) support generating images with dynamic resolutions, typically by appending extra end-of-row tokens at the end of each row. With these special tokens placed at fixed positions, we can insert the end-of-row tokens in advance when initiating the generation of the next row. Since the values of these tokens are predetermined, there is no need to update them subsequently. Conversely, for models that lack end-of-row tokens (Sun et al., 2024), we temporarily assign values to the last token in row $i - 1$ to decode token $x_{i,0}$ . This value can be derived from the most spatially adjacent token that have been decoded. + +![](images/758c1f47a9fe30ac4c9a8a0243d5cc0c01e6b9f2547ff91615aa1d88e223947c.jpg) +Figure 5: The local window size required to retain $95\%$ of attention scores across different rows and input prompt. Data is collected from the first token of each row in LuminamGPT-7B model with input prompt from COCO (Lin et al., 2014) and Parti (Yu et al., 2022) dataset. + +# 3.3. Adaptive Window Size Assignment + +While ZipAR with a predefined local window size demonstrates improved efficiency, the window size remains a hyperparameter that requires manual tuning to balance image fidelity and generation efficiency. Moreover, using a fixed window size for all token positions is suboptimal, as the attention distributions vary significantly across tokens. As illustrated in Figure 5, the local window size needed to retain $95\%$ of attention scores differs across token positions and input prompts. Consequently, maintaining a fixed window size throughout the image generation process can lead to suboptimal results, potentially compromising image fidelity. + +To address this, we propose an adaptive window size assignment scheme that dynamically adjusts the local window size during the generation process. Given a minimum window size $s_{min}$ , after generating token $x_{i,s_{min}-1}$ in row $i$ , we attempt to generate the first token in row $i+1$ . Unlike the + +fixed window size approach, we do not immediately accept this newly generated token, as the current local window size may provide insufficient information. 
Instead, in the subsequent step, with the addition of a new token from the previous row, we regenerate the token using a slightly larger window size $s_{\text{min}} + 1$ and apply an acceptance criterion to evaluate its validity based on the predictions from both steps. If the criterion is satisfied, subsequent tokens in row $i + 1$ can be generated sequentially, following a next-token prediction paradigm. Otherwise, the current window size is deemed inadequate, and we iteratively expand it until the criterion is met or the previous row is fully generated.

Specifically, we adopt a rejection sampling scheme analogous to speculative decoding (Leviathan et al., 2023; Chen et al., 2023). For consecutive window sizes $k + 1$ and $k$ in row $i$, we compute the ratio between their predictions $p(x|x_{0,0},\dots,x_{i,k})$ and $p(x|x_{0,0},\dots,x_{i,k - 1})$, which quantifies how well the token sampled under the smaller window size is supported by the prediction made with the larger window. Formally, the criterion for initiating the generation of token $x_{i + 1,0}$ with window size $k$ can be formulated as:

$$
\tilde{C}(i + 1, 0) = \begin{cases} 1, & \text{if } r < \min\left(1, \frac{p(x \mid x_{0,0}, \dots, x_{i,k})}{p(x \mid x_{0,0}, \dots, x_{i,k-1})}\right), \\ 0, & \text{otherwise} \end{cases} \tag{2}
$$

Here, we sample $r \sim U[0,1]$ from a uniform distribution. $\tilde{C}(i + 1,0) = 1$ indicates that token $x_{i + 1,0}$ is ready to be generated. If the criterion is not met, we resample $x_{i + 1,0}$ from the following distribution:

$$
x_{i + 1, 0} \sim \frac{\max\left(0, \, p(x \mid x_{0,0}, \dots, x_{i,k}) - p(x \mid x_{0,0}, \dots, x_{i,k-1})\right)}{\sum_{x} \max\left(0, \, p(x \mid x_{0,0}, \dots, x_{i,k}) - p(x \mid x_{0,0}, \dots, x_{i,k-1})\right)} \tag{3}
$$

The resampled token is subsequently verified in the next step.

# 4. Experiments

# 4.1. Implementation Details

To assess the effectiveness of our proposed method, we integrate it with three state-of-the-art autoregressive visual generation models: LlamaGen (Sun et al., 2024), Lumina-mGPT (Liu et al., 2024a) and Emu3-Gen (Wang et al., 2024b). All experiments are conducted with Nvidia A100 GPUs and the PyTorch framework. For class-conditional image generation with LlamaGen on ImageNet, we report the widely adopted Fréchet Inception Distance (FID) to evaluate the performance. We sample 50000 images and evaluate them with ADM's TensorFlow evaluation suite (Dhariwal & Nichol, 2021).

# 4.2. Main Results

# 4.2.1. CLASS-CONDITIONAL IMAGE GENERATION

In this subsection, we quantitatively evaluate the performance of class-conditional image generation on the ImageNet $256 \times 256$ benchmark using the LlamaGen model, as summarized in Table 1. The model processes a $24 \times 24$ feature map and requires 576 forward passes to generate an image under the next-token prediction (NTP) paradigm. For the LlamaGen-L model, integrating ZipAR with a minimal window size of 16 reduces the number of forward passes by $26.7\%$ without increasing the FID score. For the LlamaGen-XL model, ZipAR-12 achieves a lower FID (3.67 vs. 3.87) while requiring fewer steps than the previous parallel decoding algorithm, SJD (Teng et al., 2024) (331 steps vs. 335 steps). This highlights the efficiency of ZipAR in decoding spatially adjacent tokens in parallel.

# 4.2.2. TEXT-GUIDED IMAGE GENERATION
In this subsection, we expand our evaluation by assessing ZipAR's performance using multiple metrics, including VQAScore (Lin et al., 2024), Human Preference Score v2 (HPSv2) (Wu et al., 2023), ImageReward (Xu et al., 2023), and Aesthetic Score, across three models: LlamaGen-XL-512, Lumina-mGPT-768, and Lumina-mGPT-1024, as presented in Table 2. For the LlamaGen-XL model, ZipAR-15 reduces the number of generation steps by $45.1\%$ without any decline in the VQAScore, Image Reward and Aesthetic Score. Similarly, for the Lumina-mGPT-768 model, ZipAR-20 achieves a $54.8\%$ reduction in generation steps while improving VQAScore, HPSv2, and Aesthetic Score. When evaluating the CLIP Score over the LlamaGen-XL model, compared to the previous parallel decoding algorithm, SJD (Teng et al., 2024), ZipAR-7 significantly improves efficiency (324 steps vs. 635 steps) while achieving a higher CLIP score (0.285 vs. 0.283). Moreover, we observe that the acceleration ratio for both text-to-image models is higher than that for the class-conditional LlamaGen-L model. This is primarily attributed to the larger spatial resolution of the feature maps and the generated images. These results suggest that ZipAR provides greater efficiency gains when generating higher-resolution images.

# 4.3. Ablation Study

# 4.3.1. EFFECT OF ADAPTIVE WINDOW SIZE ASSIGNMENT

In this subsection, we evaluate the effectiveness of the proposed adaptive window size assignment scheme. Specifically, we compare the performance of ZipAR with fixed and adaptive window sizes over the class-conditional LlamaGen-L model. As shown in Figure 6, under similar generation steps, ZipAR with adaptive window size

Table 1: Quantitative evaluation on ImageNet $256 \times 256$ benchmark. The generated images are $384 \times 384$ and resized to $256 \times 256$ for evaluation. Here, "NTP" denotes the next-token prediction paradigm. "ZipAR-$n$" denotes the ZipAR paradigm with a minimal window size of $n$. "Step" is the number of model forward passes required to generate an image. The latency is measured with a batch size of 1.
| Model | Method | Step | Latency (s) | FID↓ |
| --- | --- | --- | --- | --- |
| LlamaGen-L (cfg=2.0) | NTP | 576 | 15.20 | 3.16 |
| | SJD (Teng et al., 2024) | 367 (-36.3%) | 10.83 (-28.8%) | 3.85 |
| | ZipAR-16 | 422 (-26.7%) | 11.31 (-25.6%) | 3.14 |
| | ZipAR-14 | 378 (-34.4%) | 10.16 (-33.2%) | 3.44 |
| | ZipAR-12 | 338 (-41.3%) | 9.31 (-38.8%) | 3.96 |
| LlamaGen-XL (cfg=2.0) | NTP | 576 | 22.65 | 2.83 |
| | SJD (Teng et al., 2024) | 335 (-41.8%) | 13.17 (-41.8%) | 3.87 |
| | ZipAR-16 | 423 (-26.6%) | 16.46 (-27.3%) | 2.87 |
| | ZipAR-14 | 378 (-34.4%) | 14.89 (-34.3%) | 3.16 |
| | ZipAR-12 | 331 (-41.8%) | 13.17 (-41.8%) | 3.67 |
+ +Table 2: Quantitative results on diverse automatic evaluation approaches. Here, "NTP" denotes the next-token prediction paradigm. "ZipAR-n" denotes the ZipAR paradigm with a minimal window size of $n$ . "Step" is the number of model forward passes required to generate an image. + +
| Model | Method | Steps | VQAScore↑ | HPSv2↑ | Image Reward↑ | Aesthetic Score↑ |
| --- | --- | --- | --- | --- | --- | --- |
| LlamaGen-XL-512 | NTP | 1024 | 0.6439 | 0.2647 | -0.0818 | 5.38 |
| | ZipAR-15 | 562 | 0.6534 | 0.2637 | -0.0690 | 5.39 |
| | ZipAR-11 | 451 | 0.6581 | 0.2630 | -0.0982 | 5.37 |
| | ZipAR-7 | 324 | 0.6410 | 0.2625 | -0.1683 | 5.33 |
| | ZipAR-3 | 185 | 0.6343 | 0.2599 | -0.3121 | 5.32 |
| Lumina-mGPT-768 | NTP | 2352 | 0.6579 | 0.2743 | 0.4164 | 6.10 |
| | ZipAR-20 | 1063 | 0.6595 | 0.2747 | 0.3971 | 6.13 |
| | ZipAR-17 | 915 | 0.6433 | 0.2732 | 0.3049 | 6.12 |
| | ZipAR-14 | 740 | 0.6589 | 0.2739 | 0.3646 | 6.10 |
| | ZipAR-11 | 588 | 0.6490 | 0.2730 | 0.2861 | 6.10 |
| Lumina-mGPT-1024 | NTP | 4160 | 0.6718 | 0.2762 | 0.4232 | 5.97 |
| | ZipAR-20 | 1331 | 0.6705 | 0.2761 | 0.3913 | 5.95 |
| | ZipAR-17 | 1150 | 0.6797 | 0.2761 | 0.4018 | 5.94 |
| | ZipAR-14 | 964 | 0.6732 | 0.2747 | 0.3298 | 5.94 |
| | ZipAR-11 | 772 | 0.6723 | 0.2746 | 0.3222 | 5.95 |
+ +consistently achieves a lower FID than its fixed-window counterpart, which suggests that dynamically adjusting the window size based on token position and context enhances the fidelity of generated images. + +# 4.3.2. IMPACT ON OPTIMAL SAMPLING HYPERPARAMETERS + +As presented in Tables 4-5, we performed a grid search to determine the optimal token-sampling hyperparameters, namely, sampling temperature and classifier-free guidance scale, for ZipAR. The results are shown below. These results indicate that ZipAR sampling does not alter the optimal sampling temperature and classifier-free guidance scale. + +# 4.4. Qualitative Visualizations + +In this subsection, we present non-cherry-picked visualizations of images generated using the next-token prediction + +(NTP) paradigm and the proposed ZipAR framework over Emu3-Gen (Wang et al., 2024b) and Lumina-mGPT-7B (Liu et al., 2024a), as shown in Figures 1 and 7. Notably, ZipAR can reduce the number of model forward steps by up to $91\%$ for Emu3-Gen, while still producing high-fidelity images rich in semantic information. + +# 5. Conclusion + +In this paper, we have proposed ZipAR, a new parallel decoding framework designed to accelerate autoregressive visual generation. ZipAR leverages the spatial locality inherent in visual content and predicts multiple spatially adjacent visual tokens in a single model forward pass, thereby significantly enhancing generation efficiency compared to the traditional next-token-prediction paradigm. An adaptive local window assignment scheme with rejection sampling is employed, ensuring that each token is generated with + +Table 3: Quantitative evaluation on MS-COCO dataset. Here, "NTP" denotes the next-token prediction paradigm. "ZipAR-n" denotes the ZipAR paradigm with a minimal window size of $n$ . "Step" is the number of model forward passes required to generate an image. The latency is measured with a batch size of 1. + +
| Model | Method | Step | Latency (s) | CLIP Score↑ |
| --- | --- | --- | --- | --- |
| LlamaGen-XL-512 | NTP | 1024 | 33.17 | 0.287 |
| | SJD (Teng et al., 2024) | 635 (-38.0%) | 24.80 (-25.2%) | 0.283 |
| | ZipAR-15 | 562 (-45.1%) | 18.98 (-42.7%) | 0.287 |
| | ZipAR-11 | 451 (-55.9%) | 14.65 (-55.8%) | 0.286 |
| | ZipAR-7 | 324 (-68.4%) | 10.24 (-69.1%) | 0.285 |
| | ZipAR-3 | 185 (-81.9%) | 5.86 (-82.3%) | 0.281 |
| Lumina-mGPT-7B-768 | NTP | 2352 | 91.70 | 0.313 |
| | SJD (Teng et al., 2024) | 1054 (-55.2%) | 60.27 (-34.2%) | 0.313 |
| | ZipAR-20 | 1063 (-54.8%) | 63.28 (-31.0%) | 0.314 |
| | ZipAR-17 | 915 (-61.0%) | 58.54 (-36.2%) | 0.314 |
| | ZipAR-14 | 740 (-68.5%) | 53.41 (-41.8%) | 0.313 |
| | ZipAR-11 | 588 (-75.0%) | 50.32 (-45.1%) | 0.312 |
+ +![](images/05127a75a86fe3b8bc6c25333a55f57017256c9365f34524e0e69a236d775978.jpg) +Figure 6: Performance comparisons of ZipAR over class-conditional LlamaGen-L model with fixed window size and adaptive window size. Under similar step budget, ZipAR with adaptive window size always achieves lower FID. + +sufficient contextual information. Extensive experiments demonstrate that ZipAR can reduce the number of model forward steps by up to $91\%$ on the Emu3-Gen model with minimal impact on image quality. + +In the future, we anticipate that integrating ZipAR with other methods that employ the next-set-prediction paradigm, such as Medusa (Cai et al., 2024) and Jacobi decoding (Santilli et al., 2023), will further enhance acceleration ratios. + +# Acknowledgements + +This work was supported by the National Key Research and Development Program of China (2022YFC3602601) and the National Key Research and Development Program of China (2022ZD0160102). + +Table 4: The performance of LlamaGen and ZipAR under different classifier-free guidance. Here, “*” denotes the results obtained from LlamaGen's paper. + +
| Model | Classifier-free Guidance | FID↓ |
| --- | --- | --- |
| LlamaGen-L* | 1.5 | 4.74 |
| | 1.75 | 3.15 |
| | 2.0 | 3.07 |
| | 2.25 | 3.62 |
| ZipAR-16 | 1.5 | 6.18 |
| | 1.75 | 3.72 |
| | 2.0 | 3.14 |
| | 2.25 | 3.44 |
+

Table 5: The performance of LlamaGen and ZipAR under different sampling temperatures. Here, "*" denotes the results obtained from LlamaGen's paper.
| Model | Temperature | FID↓ |
| --- | --- | --- |
| LlamaGen-L | 0.96 | 3.53 |
| | 0.98 | 3.24 |
| | 1.0* | 3.07 |
| | 1.02 | 3.14 |
| ZipAR-16 | 0.96 | 3.35 |
| | 0.98 | 3.25 |
| | 1.0 | 3.14 |
| | 1.02 | 3.34 |
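To make the row-parallel schedule of Section 3.2 concrete, the sketch below enumerates, for a toy token grid, which positions satisfy the fixed-window criterion of Eq. (1) and can therefore be decoded in the same forward pass. It is a simplified illustration of the scheduling logic only (model calls, end-of-row token handling, and the adaptive window of Section 3.3 are omitted), and the grid and window sizes are arbitrary example values, not the settings used in the experiments above.

```python
def zipar_schedule(H, W, s):
    """Enumerate decoding steps for an H x W token grid under a fixed window size s.

    A token (i, j) becomes decodable once tokens (i-1, j), ..., (i-1, j+s-1) of the
    previous row are available (Eq. 1, clipped at the row boundary) and its left
    neighbour (i, j-1) has been decoded; each step lists the positions decoded in
    parallel, one per active row.
    """
    decoded = set()
    steps = []
    while len(decoded) < H * W:
        ready = []
        for i in range(H):
            for j in range(W):
                if (i, j) in decoded:
                    continue
                left_ok = (j == 0) or (i, j - 1) in decoded
                window_ok = (i == 0) or all(
                    (i - 1, k) in decoded for k in range(j, min(j + s, W))
                )
                if left_ok and window_ok:
                    ready.append((i, j))
        steps.append(ready)
        decoded.update(ready)
    return steps

# Example: a 4 x 8 grid with window size 2 finishes in 14 parallel steps
# instead of the 32 sequential steps required by raster-order decoding.
if __name__ == "__main__":
    print(len(zipar_schedule(H=4, W=8, s=2)), "parallel steps instead of", 4 * 8)
```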
+ +# Impact Statement + +The proposed ZipAR framework stands out for its high efficiency, which carry significant implications in reducing the carbon emissions attributed to the widespread deployment of deep generative models. However, similar to other deep generative models, ZipAR has the potential to be utilized for producing counterfeit images and videos for malicious purposes. + +Prompt: image of a dog playing water, and a water fall is in the background + +![](images/518850ea8b1d32a30017915483f380c9bb32fc86014aae71021116b0a07650ad.jpg) + +Prompt: Image of a magical fairy tale castle on a hilltop surrounded by a mystical forest + +![](images/bb88927b358d0ad8d896e38dbb8b4beec9d112048be3fa9d5401dd85e8b1d240.jpg) + +Prompt: A bustling downtown street in Tokyo at night, with neon signs, sidewalks, and tall skyscrapers + +![](images/658baf18451814b3e9e727f7301621fee71bb825de779dd7cf56cef5281609a7.jpg) + +Prompt: A serene beach at sunset, with palm trees swaying in the breeze and a sailboat in the distance + +![](images/d86785154b9297aa1f2f9a2f7054aedc9ef9d31669ec1c2794ec81bc203462b6.jpg) + +Prompt: A medieval knight standing guard in front of a grand castle, with a dragon flying overhead + +![](images/d812f309d65f2a51a325604c8c773f0c326233b966945292af64af428c48266e.jpg) +Figure 7: Samples generated by the Lumina-mGPT-7B-768 model with next-token prediction paradigm (the first column) and ZipAR under different configurations (the right three columns). The classifier-free guidance is set to 3. + +Steps: 2355 + +Steps: 1508 (-36%) + +Steps: 1132 (-52%) + +Steps: 756 (-67%) + +# References + +Achiam, J., Adler, S., Agarwal, S., Ahmad, L., Akkaya, I., Aleman, F. L., Almeida, D., Altenschmidt, J., Altman, S., Anadkat, S., et al. Gpt-4 technical report. arXiv preprint arXiv:2303.08774, 2023. +Cai, T., Li, Y., Geng, Z., Peng, H., Lee, J. D., Chen, D., and Dao, T. Medusa: Simple llm inference acceleration framework with multiple decoding heads. arXiv preprint arXiv:2401.10774, 2024. +Chang, H., Zhang, H., Jiang, L., Liu, C., and Freeman, W. T. Maskgit: Masked generative image transformer. In CVPR, 2022. +Chang, H., Zhang, H., Barber, J., Maschinot, A., Lezama, J., Jiang, L., Yang, M.-H., Murphy, K., Freeman, W. T., Rubinstein, M., et al. Muse: Text-to-image generation via masked generative transformers. arXiv preprint arXiv:2301.00704, 2023. +Chen, C., Borgeaud, S., Irving, G., Lespiau, J.-B., Sifre, L., and Jumper, J. Accelerating large language model decoding with speculative sampling. arXiv preprint arXiv:2302.01318, 2023. +Chen, X., Wu, Z., Liu, X., Pan, Z., Liu, W., Xie, Z., Yu, X., and Ruan, C. Janus-pro: Unified multimodal understanding and generation with data and model scaling, 2025. +Dhariwal, P. and Nichol, A. Diffusion models beat gans on image synthesis. Advances in neural information processing systems, 2021. +Esser, P., Rombach, R., and Ommer, B. Taming transformers for high-resolution image synthesis. In CVPR, 2021. +Fu, Y., Bailis, P., Stoica, I., and Zhang, H. Break the sequential dependency of llm inference using lookahead decoding. arXiv preprint arXiv:2402.02057, 2024. +Ge, Y., Zhao, S., Zhu, J., Ge, Y., Yi, K., Song, L., Li, C., Ding, X., and Shan, Y. Seed-x: Multimodal models with unified multi-granularity comprehension and generation. arXiv preprint arXiv:2404.14396, 2024. +Gui, L., Xiao, B., Su, L., and Chen, W. Boosting lossless speculative decoding via feature sampling and partial alignment distillation. arXiv preprint arXiv:2408.15562, 2024. 
+Kou, S., Hu, L., He, Z., Deng, Z., and Zhang, H. Cllms: Consistency large language models. arXiv preprint arXiv:2403.00835, 2024. +Krizhevsky, A., Sutskever, I., and Hinton, G. E. Imagenet classification with deep convolutional neural networks. NeurIPS, 2012. + +LeCun, Y., Boser, B., Denker, J., Henderson, D., Howard, R., Hubbard, W., and Jackel, L. Handwritten digit recognition with a back-propagation network. NeurIPS, 1989. +Leviathan, Y., Kalman, M., and Matias, Y. Fast inference from transformers via speculative decoding. In ICML, 2023. +Li, T., Tian, Y., Li, H., Deng, M., and He, K. Autoregressive image generation without vector quantization. arXiv preprint arXiv:2406.11838, 2024. +Lin, T.-Y., Maire, M., Belongie, S., Hays, J., Perona, P., Ramanan, D., Dollar, P., and Zitnick, C. L. Microsoft coco: Common objects in context. In ECCV, 2014. +Lin, Z., Pathak, D., Li, B., Li, J., Xia, X., Neubig, G., Zhang, P., and Ramanan, D. Evaluating text-to-visual generation with image-to-text generation, 2024. URL https://arxiv.org/abs/2404.01291. +Liu, D., Zhao, S., Zhuo, L., Lin, W., Qiao, Y., Li, H., and Gao, P. Lumina-mgpt: Illuminate flexible photorealistic text-to-image generation with multimodal generative pretraining. arXiv preprint arXiv:2408.02657, 2024a. +Liu, X., Hu, L., Bailis, P., Cheung, A., Deng, Z., Stoica, I., and Zhang, H. Online speculative decoding. In ICML, 2024b. +Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., and Guo, B. Swin transformer: Hierarchical vision transformer using shifted windows. In ICCV, 2021. +Santilli, A., Severino, S., Postolache, E., Maiorca, V., Mancusi, M., Marin, R., and Rodola, E. Accelerating transformer inference for translation via parallel decoding. arXiv preprint arXiv:2305.10427, 2023. +Spector, B. and Re, C. Accelerating llm inference with staged speculative decoding. arXiv preprint arXiv:2308.04623, 2023. +Sun, P., Jiang, Y., Chen, S., Zhang, S., Peng, B., Luo, P., and Yuan, Z. Autoregressive model beats diffusion: Llama for scalable image generation. arXiv preprint arXiv:2406.06525, 2024. +Team, C. Chameleon: Mixed-modal early-fusion foundation models. arXiv preprint arXiv:2405.09818, 2024. +Team, G., Anil, R., Borgeaud, S., Wu, Y., Alayrac, J.-B., Yu, J., Soricut, R., Schalkwyk, J., Dai, A. M., Hauth, A., et al. Gemini: a family of highly capable multimodal models. arXiv preprint arXiv:2312.11805, 2023. +Teng, Y., Shi, H., Liu, X., Ning, X., Dai, G., Wang, Y., Li, Z., and Liu, X. Accelerating auto-regressive text-to-image generation with training-free speculative jacobi decoding. arXiv preprint arXiv:2410.01699, 2024. + +Tian, K., Jiang, Y., Yuan, Z., Peng, B., and Wang, L. Visual autoregressive modeling: Scalable image generation via next-scale prediction. arXiv preprint arXiv:2404.02905, 2024. +Van Den Oord, A., Vinyals, O., et al. Neural discrete representation learning. NeurIPS, 2017. +Vavekanand, R. and Sam, K. Llama 3.1: An in-depth analysis of the next-generation large language model, 2024. +Wang, C., Lu, G., Yang, J., Huang, R., Han, J., Hou, L., Zhang, W., and Xu, H. Illume: Illuminating your llms to see, draw, and self-enhance. arXiv preprint arXiv:2412.06673, 2024a. +Wang, X., Zhang, X., Luo, Z., Sun, Q., Cui, Y., Wang, J., Zhang, F., Wang, Y., Li, Z., Yu, Q., et al. Emu3: Next-token prediction is all you need. arXiv preprint arXiv:2409.18869, 2024b. +Wu, C., Chen, X., Wu, Z., Ma, Y., Liu, X., Pan, Z., Liu, W., Xie, Z., Yu, X., Ruan, C., et al. 
Janus: Decoupling visual encoding for unified multimodal understanding and generation. arXiv preprint arXiv:2410.13848, 2024a. +Wu, J., Jiang, Y., Ma, C., Liu, Y., Zhao, H., Yuan, Z., Bai, S., and Bai, X. Liquid: Language models are scalable multimodal generators. arXiv preprint arXiv:2412.04332, 2024b. +Wu, X., Hao, Y., Sun, K., Chen, Y., Zhu, F., Zhao, R., and Li, H. Human preference score v2: A solid benchmark for evaluating human preferences of text-to-image synthesis. arXiv preprint arXiv:2306.09341, 2023. +Xu, J., Liu, X., Wu, Y., Tong, Y., Li, Q., Ding, M., Tang, J., and Dong, Y. Imagereward: learning and evaluating human preferences for text-to-image generation. In NeurIPS, 2023. +Yu, J., Xu, Y., Koh, J. Y., Luong, T., Baid, G., Wang, Z., Vasudevan, V., Ku, A., Yang, Y., Ayan, B. K., et al. Scaling autoregressive models for content-rich text-to-image generation. arXiv preprint arXiv:2206.10789, 2(3):5, 2022. +Yu, L., Cheng, Y., Sohn, K., Lezama, J., Zhang, H., Chang, H., Hauptmann, A. G., Yang, M.-H., Hao, Y., Essa, I., et al. Magvit: Masked generative video transformer. In CVPR, 2023. +Zeiler, M. D. and Fergus, R. Visualizing and understanding convolutional networks. In ECCV, 2014. +Zhang, Z., Zhang, H., Zhao, L., Chen, T., Arik, S. Ö., and Pfister, T. Nested hierarchical transformer: Towards accurate, data-efficient and interpretable visual understanding. In AAAI, 2022. \ No newline at end of file diff --git a/ICML/2025/ZipAR_ Parallel Autoregressive Image Generation through Spatial Locality/images.zip b/ICML/2025/ZipAR_ Parallel Autoregressive Image Generation through Spatial Locality/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..098f0f08080c9074d567a22639825a1f9174bd86 --- /dev/null +++ b/ICML/2025/ZipAR_ Parallel Autoregressive Image Generation through Spatial Locality/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e425031d808a81198c60a0c7d9c69343ce89964507c1882920b241d13ce8f064 +size 997749 diff --git a/ICML/2025/ZipAR_ Parallel Autoregressive Image Generation through Spatial Locality/layout.json b/ICML/2025/ZipAR_ Parallel Autoregressive Image Generation through Spatial Locality/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..7c39065b0c2d72d4d611d611be7fa177edab0600 --- /dev/null +++ b/ICML/2025/ZipAR_ Parallel Autoregressive Image Generation through Spatial Locality/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b77f6ab151b50776974bb6095f5b8c86d2897815a040e21fa58a294975a7a54a +size 342127 diff --git a/ICML/2025/am-ELO_ A Stable Framework for Arena-based LLM Evaluation/485ae2a8-1fc5-402d-9e71-4d6939418c63_content_list.json b/ICML/2025/am-ELO_ A Stable Framework for Arena-based LLM Evaluation/485ae2a8-1fc5-402d-9e71-4d6939418c63_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..5ff3fc1be42d87f9b54a64b23de9339d1a21dbd3 --- /dev/null +++ b/ICML/2025/am-ELO_ A Stable Framework for Arena-based LLM Evaluation/485ae2a8-1fc5-402d-9e71-4d6939418c63_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:eb50db766cfc3d27805fa43ed1b637ff8916d94428d15d1afe1bb32d611e5497 +size 85281 diff --git a/ICML/2025/am-ELO_ A Stable Framework for Arena-based LLM Evaluation/485ae2a8-1fc5-402d-9e71-4d6939418c63_model.json b/ICML/2025/am-ELO_ A Stable Framework for Arena-based LLM Evaluation/485ae2a8-1fc5-402d-9e71-4d6939418c63_model.json new file mode 100644 index 
0000000000000000000000000000000000000000..bed3db5d053c589bb3b681799ea374ac76e94b8f --- /dev/null +++ b/ICML/2025/am-ELO_ A Stable Framework for Arena-based LLM Evaluation/485ae2a8-1fc5-402d-9e71-4d6939418c63_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:46a965a0c4168c8796f2d5a9c79c05a0b5831d511ac9ea0c5e42229262b9342f +size 108494 diff --git a/ICML/2025/am-ELO_ A Stable Framework for Arena-based LLM Evaluation/485ae2a8-1fc5-402d-9e71-4d6939418c63_origin.pdf b/ICML/2025/am-ELO_ A Stable Framework for Arena-based LLM Evaluation/485ae2a8-1fc5-402d-9e71-4d6939418c63_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..6814d71306fabd45fa43976aef957a1fea9d044b --- /dev/null +++ b/ICML/2025/am-ELO_ A Stable Framework for Arena-based LLM Evaluation/485ae2a8-1fc5-402d-9e71-4d6939418c63_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d3d85563c43f1a41fb544b3890634a6d2cb0adee181d5fad93b258b4e77f89f4 +size 596592 diff --git a/ICML/2025/am-ELO_ A Stable Framework for Arena-based LLM Evaluation/full.md b/ICML/2025/am-ELO_ A Stable Framework for Arena-based LLM Evaluation/full.md new file mode 100644 index 0000000000000000000000000000000000000000..93a18c6850fbbd985a0347267f697541bb323543 --- /dev/null +++ b/ICML/2025/am-ELO_ A Stable Framework for Arena-based LLM Evaluation/full.md @@ -0,0 +1,403 @@ +Zirui Liu1 Jiatong Li1 Yan Zhuang1 Qi Liu *1,2 Shuanghong Shen2 Jie Ouyang1 Mingyue Cheng1 Shijin Wang1,3 + +# Abstract + +Arena-based evaluation is a fundamental yet significant evaluation paradigm for modern AI models, especially large language models (LLMs). Existing framework based on ELO rating system suffers from the inevitable instability problem due to ranking inconsistency and the lack of attention to the varying abilities of annotators. In this paper, we introduce a novel stable arena framework to address these issues by enhancing the ELO Rating System. Specifically, we replace the iterative update method with a Maximum Likelihood Estimation (MLE) approach, m-ELO, and provide theoretical proof of the consistency and stability of the MLE approach for model ranking. Additionally, we proposed the am-ELO, which modify the Elo Rating's probability function to incorporate annotator abilities, enabling the simultaneous estimation of model scores and annotator reliability. Experiments demonstrate that this method ensures stability, proving that this framework offers a more robust, accurate, and stable evaluation method for LLMs. + +# 1. Introduction + +The rapid advancement of large language models (LLMs) (Jin et al., 2024b; Ouyang et al., 2025; Cheng et al., 2025) has led to the proliferation of "model arenas"—platforms designed to compare and evaluate multiple models, identifying their relative strengths and weaknesses (Chiang et al., 2024). These arenas play a critical role in driving innovation and shaping the deployment of cutting-edge LLMs across diverse applications. The ELO rating system (Elo, 1967), a well-established methodology for quantitatively + +$^{1}$ State Key Laboratory of Cognitive Intelligence, University of Science and Technology of China, Hefei, China $^{2}$ Institute of Artificial Intelligence, Hefei Comprehensive National Science Center, Hefei, China $^{3}$ iFLYTEK Co., Ltd, Hefei, China. Correspondence to: Qi Liu . + +Proceedings of the $42^{nd}$ International Conference on Machine Learning, Vancouver, Canada. PMLR 267, 2025. Copyright 2025 by the author(s). 
+ +![](images/eec39451a21d29c2cd4095500fb92db07423906b63b9ffdf5468f6f409de525c.jpg) +Figure 1. An example of ELO score. The error bar represents the standard deviation and the error line represents the difference between the maximum or minimum value and the mean value. The line chart represents the ELO scores estimated from the records of the specific annotator. + +assessing the relative capabilities of competitors in games, forms the theoretical foundation for the evaluation systems in most existing model arenas (Bai et al., 2022; Boubdir et al., 2023). + +A significant issue with the current ELO method is its instability, which can be attributed to two main factors: 1) From an algorithmic perspective, the existing ELO method treats the data as dynamic, making the results highly sensitive to the order in which the data is presented (Aldous, 2017; Li et al., 2024; Zhang et al., 2024a). In other words, when the same records are shuffled and re-evaluated, the ELO method often yields inconsistent scores. For instance, as shown in Figure 1, the significant error (highlighted in gray) complicates the comparison of models with similar abilities. 2) The judgment of human annotators varies across different aspects such as quality, relevance, and importance of texts. For example, in the line chart in Figure 1, different annotators provide inconsistent ELO scores for each model. However, the arena-based evaluation paradigm, which involves human participation, overlooks these individual differences among humans (Welinder & Perona, 2010; Raykar & Yu, 2011). + +Ignoring this variability introduces biases and instability into the evaluation process, further undermining the credibility of both the results and the decisions derived from + +![](images/e932662433623135f527ebce1baa93cca3f1de2a14bb9a63c5d3c658ebe66399.jpg) +Figure 2. The traditional iterative ELO method and our proposed am-ELO method based on MLE. + +them (Eickhoff, 2018). These instabilities diminish the interpretability and practical value of ELO scores, eroding confidence in the conclusions drawn from such evaluations, particularly when they are used to inform high-stakes decisions regarding model deployment or research directions. + +In this work, we propose a novel stable arena framework to address these shortcomings. As illustrated in Figure 2, to mitigate the inconsistencies in ELO scores, we introduce a maximum likelihood estimation (MLE)-driven ELO rating method, referred to as m-ELO. By deriving the theoretical properties of this reformulation, we demonstrate that the proposed method produces consistent results without altering the fundamental principles of the original ELO method. Furthermore, to account for variability in annotator performance, we propose an annotator ability-aware enhancement method for ELO (am-ELO), grounded in psychometrics (Morizot et al., 2009; Furr, 2021). By modifying the ELO probability function, we estimate the annotator's ability and adjust their contribution accordingly, leading to a more accurate and equitable aggregation of evaluation results. + +Through experiments on real-world datasets, we demonstrate that our framework effectively models annotators while ensuring the consistency of ELO scores. Furthermore, in simulation experiments, our method not only identifies anomalous annotators but also reduces the inconsistency of ELO scores to $30\%$ compared to the traditional ELO method. This indicates that our approach effectively mitigates the instability inherent in the traditional ELO method. + +# 2. 
Background and Related Work + +Arena-based evaluation is an important subfield within the broader domain of LLM evaluation. Unlike traditional evaluation paradigms (Zellers et al., 2019; Hendrycks et al., 2020; Cobbe et al., 2021; Liang et al., 2022; Jin et al., 2024a), which typically assess a model's performance against predefined benchmarks, arena-based evaluation involves models + +competing directly with others. Current research in this area can generally be divided into three key categories: Battle Scenarios, Annotators, and Ranking Systems. + +**Battle Scenario** The classic battle scenario is exemplified by the Chatbot Arena (Chiang et al., 2024), in which models respond to the same question and annotators compare their outputs. However, this approach are susceptible to the inherent biases of the annotators. To address this issue, several studies have incorporated multiple models working collaboratively to generate and evaluate responses, enabling iterative improvements (Zhao et al., 2024). Notable examples of this approach include LLMChain (Bouchiha et al., 2024) and ChatEval (Chan et al., 2024). While such strategies offer increased fairness, they come with trade-offs, including higher computational costs and potential instability. + +Annotator In arena-based evaluation, the comparison of results typically involves human annotators (Cheng et al., 2024) or highly capable LLMs, such as GPT-4 (Achiam et al., 2023) and Claude (Anthropic). Additionally, some researchers have explored the use of specialized referee models for this task, such as PandaLM (Wang et al., 2023), JudgeLM (Zhu et al., 2023), and Auto-J (Li et al., 2023), which are designed to enhance the evaluation process. + +Ranking Systems for LLM Evaluation Ranking systems play a crucial role in arena-based LLM evaluation (Busa-Fekete et al., 2014; Szörenyi et al., 2015; Chernoff, 1992). Among the existing approaches, many arena-based methods rely on the ELO Rating System to model LLMs' capabilities (Coulom, 2007; Pelanek, 2016). The ELO rating system, grounded in the Bradley-Terry model (Hunter, 2004; Rao & Kupper, 1967), is widely used in competitive games (Sismanis, 2010; Ebtekar & Liu, 2021) to predict the likelihood of one competitor outperforming another based on their relative abilities. However, due to its dynamic nature, which is tailored for traditional competitive games, the ELO system introduces instability in LLM evaluation. To mitigate this instability, existing approaches typically perform multiple random shuffles of the annotated dataset and calculate ELO scores for each iteration (Sismanis, 2010). The statistical summary, such as the mean or variance of the scores across these shuffles, is then used as the final evaluation metric. Although this strategy provides a practical solution, it does not fundamentally resolve the inconsistency introduced by the sequential updates in the ELO method. + +# 3. Preliminary + +Arena-based Evaluation is a highly anticipated method in LLMs evaluation, where models are compared head-to-head on benchmarks or datasets and the results are annotated by evaluators. 
Let $S = \{(i,j,k,W_{ij}) \mid i,j \in [N], k \in [M]\}$ represent the comparative dataset we have collected, where $N$ is the number of models and $M$ is the number of annotators. Each element $(i,j,k,W_{ij}) \in S$ indicates that model $i$ and model $j$ engaged in a battle, and annotator $k$ provided the result $W_{ij}$. Specifically, $W_{ij} = 1$ indicates that model $i$ won the battle, $W_{ij} = 0$ indicates that model $j$ won, and $W_{ij} = 0.5$ indicates a tie. The goal of arena-based evaluation is to estimate the ranking scores $R = (R_1,\dots ,R_N)$ for the models based on the record $S$.

Algorithm 1 The Traditional ELO Rating System
Input: Dataset $S$, Scaling Factor $K$, Init Score $R_{init}$
Initialize: Set of Scores $RS_{i}\gets \emptyset$, Score of Models $R_{i}\gets R_{init}$
Calculate ELO Score:
for $(i,j,W_{ij})\in S$ do
  $R_i^{\prime}\gets R_i + K\cdot (W_{ij} - P(R_i,R_j))$
  $R_j^{\prime}\gets R_j + K\cdot (W_{ji} - P(R_j,R_i))$
end for
Output: ELO Score $(R_1,\dots ,R_N)$

**ELO Rating System** The ELO rating system is a widely used method for ranking competitors based on pairwise comparisons. In the ELO system, each competitor (or model) is assigned a rating $R$, which represents its relative strength. When two models, $i$ and $j$, compete, their respective ratings, $R_i$ and $R_j$, are used to calculate the expected probability of each outcome: $P(R_i, R_j) = P(W_{ij} = 1) = \frac{1}{1 + e^{-C(R_i - R_j)}}$, where $C$ is a constant that scales the difference in ratings. After observing the actual outcome of the match, the ratings are updated as follows:

$$
\begin{array}{l} R_i^{\prime} = R_i + K \cdot \left(W_{ij} - P(R_i, R_j)\right), \\ R_j^{\prime} = R_j + K \cdot \left(W_{ji} - P(R_j, R_i)\right), \tag{1} \end{array}
$$

where $K$ is a scaling factor that controls the magnitude of rating changes. The pseudo-code for this process is shown in Algorithm 1. However, the existing ELO method is iterative and highly sensitive to the order of the data. This is ill-suited to LLM evaluation, because evaluation can be seen as a static process (Zhan et al., 2024). Specifically, the errors introduced by the ELO method arise from the algorithm's dynamics rather than the data itself, which undermines the statistical significance of the ELO scores for many models.

Moreover, current algorithms do not account for differences in annotator abilities. They treat all annotators as if they have the same ability $C$, mixing annotation records randomly. This assumption can introduce bias and instability into the evaluation process.

# 4. Better Performance Estimation with ELO

Earlier, we introduced the traditional ELO method and highlighted its key challenges, including ranking inconsistencies and the lack of consideration for annotator variability. To address these issues, this section presents a stable arena framework with improvements to the ELO method.

# 4.1. MLE for ELO (m-ELO) Estimation

The traditional ELO rating estimation method is based on an iterative algorithm, and the results are highly dependent on the order of the samples. This explains why ELO ratings often lack consistency. Inspired by the insensitivity of maximum likelihood estimation (MLE) to the sample order, we propose an MLE-driven ELO estimation algorithm, termed m-ELO. 
Specifically, for the record dataset $S$, its log-likelihood function can be expressed as follows:

$$
\ln L = \sum_{(i, j, W_{ij}) \in S} W_{ij} \ln P(R_i, R_j) + W_{ji} \ln P(R_j, R_i), \tag{2}
$$

where $P(R_{i},R_{j}) = \frac{1}{1 + e^{-C(R_{i} - R_{j})}}$. The result of the MLE method, $(R_1^*,R_2^*,\ldots ,R_N^*)$, can be obtained by solving for the extreme point of the log-likelihood function using gradient descent. Specifically, for any given model $n\in [N]$, the gradient of the log-likelihood function with respect to its rating $R_{n}$ is:

$$
\frac{\partial \ln L}{\partial R_n} = \sum_{(n, j, W_{nj}) \in S} C \left(W_{nj} - P(R_n, R_j)\right). \tag{3}
$$

By comparing Equations 1 and 3, we observe that the two formulas share a consistent structure. This highlights the essence of the ELO algorithm: it performs gradient ascent with a learning rate of $\frac{K}{C}$ on the log-likelihood of each individual annotated sample. Gradient updates based on individual samples rarely converge, which reveals a key shortcoming of the traditional ELO method.

**Convergence Analysis** Although the estimation results of the MLE method are not influenced by the sample order, another important consideration is whether the log-likelihood function has only one extreme point. If multiple extreme points exist, it could still lead to inconsistencies in the ELO rankings. Unfortunately, because ELO scores are relative, it is clear that if $(R_1^*, R_2^*, \ldots, R_N^*)$ is an extreme point, then $(R_1^* + \epsilon, R_2^* + \epsilon, \dots, R_N^* + \epsilon)$ is also an extreme point. Thus, the extreme points of the log-likelihood function are not unique. However, when we fix the score of one of the models, we obtain the following theorem (Zermelo, 1929):

Theorem 4.1. Assume that $R_1 = 0$ and that $|S|$ is sufficiently large. Then the log-likelihood function $\ln L$ with respect to $(R_2, \dots, R_N)$ is a concave function and has at most one extreme point.

Drawing on Theorem 4.1, we can assert that the ELO scores obtained through the MLE method are relatively stable between models, meaning that the difference in ability between any two models remains stable.

Replacing the iterative method with the MLE approach makes the ELO method more flexible. Additionally, it allows us to model annotator abilities during the evaluation process. In the next section, we will adopt ideas from psychometrics to propose a feasible modeling approach and analyze its interpretability.

# 4.2. Annotator Modeling m-ELO (am-ELO) Estimation

Although ability modeling is not commonly seen in LLM evaluation, many ability modeling methods have been developed in education and psychometrics (Liu et al., 2021; Wang et al., 2022; Zhang et al., 2024b; Zhuang et al., 2022; Liu et al., 2024). One prominent method is Item Response Theory (IRT) (Embretson & Reise, 2013; Zhu et al., 2022; Nguyen & Zhang, 2022; Polo et al., 2024). IRT posits that an examinee's performance on a test depends solely on their ability $\theta$ and the properties of the questions. The standard model is the two-parameter logistic (2PL) model, defined as: $P_{j}(\theta) = P(y_{j} = 1) = \frac{1}{1 + e^{-a_{j}(\theta - b_{j})}}$, where $y_{j} = 1$ indicates a correct response to question $j$, and $a_{j}$ and $b_{j} \in \mathbb{R}$ represent the discrimination and difficulty of question $j$.

As noted, the parameter $a$ in IRT can be interpreted as the discrimination parameter. 
Similarly, in the ELO method, the fixed value $C$ can also be understood as a discrimination parameter. To account for annotator variability, we replace the fixed value $C$ in the probability function with a parameter $\theta_{k}$ that is specific to annotator $k$:

$$
P(R_i, R_j \mid \theta_k) = \frac{1}{1 + e^{-\theta_k (R_i - R_j)}}. \tag{4}
$$

This new formulation has the following properties:

- Maintained symmetry: The symmetry with respect to the models' abilities $R_{i}$ and $R_{j}$ is preserved even after modifying the constant $C$ to an annotator-related parameter $\theta_{k}$, such that $P(R_{i}, R_{j}|\theta_{k}) + P(R_{j}, R_{i}|\theta_{k}) = 1$.
- Discriminative ability ($\theta_{k} > 0$): When the abilities of two models are identical, the change in win probability caused by small variations in the ability values is proportional to the annotator's ability, since $\theta_{k} = 4\frac{\partial P(R_{i},r \mid \theta_{k})}{\partial R_{i}}\big|_{R_{i} = r}$. Therefore, the annotator's ability $\theta_{k}$ represents the maximum discriminative ability.
- Anomalous annotator ($\theta_{k} < 0$): When the discriminative ability $\theta_{k}$ is negative, for any model $i$ with greater ability than model $j$, annotator $k$ perceives the probability of model $i$ winning as less than 0.5. This indicates an anomalous annotator.

To estimate the parameters of the probability function, we similarly consider its log-likelihood function:

$$
\ln L = \sum_{(i, j, k, W_{ij}) \in S} W_{ij} \ln P(R_i, R_j \mid \theta_k) + W_{ji} \ln P(R_j, R_i \mid \theta_k). \tag{5}
$$

After modifying the probability function, we need to account for both the ELO scores of the models $R = (R_{1},\dots ,R_{N})$ and the abilities of the annotators $\Theta = (\theta_{1},\ldots ,\theta_{M})$ during gradient descent. For a model $n\in [N]$ and an annotator $m\in [M]$, the gradients of $\ln L$ with respect to them can be expressed as:

$$
\frac{\partial \ln L}{\partial R_n} = \sum_{(n, j, k, W_{nj}) \in S} \theta_k \left(W_{nj} - P(R_n, R_j \mid \theta_k)\right),
$$

$$
\frac{\partial \ln L}{\partial \theta_m} = \sum_{(i, j, m, W_{ij}) \in S} (R_i - R_j) \left(W_{ij} - P(R_i, R_j \mid \theta_m)\right). \tag{6}
$$

This method allows us to estimate the annotators' abilities simultaneously during the MLE process. Beyond the notion of discrimination introduced by the improved probability function, we should also explore the practical significance of this ability estimate in the context of the arena. Through analysis, we find that the estimated annotator ability $\theta_{k}$ exhibits the following two properties:

Theorem 4.2. Given that $\theta$ represents the ability of annotators estimated by am-ELO, the following conclusions can be drawn:

(1) If two annotators label the same set of samples $S'$, providing labels $W_{ij}$ and $W_{ij}'$ and having abilities $\theta_1$ and $\theta_2$ ($\theta_2 > \theta_1$), then:

$$
\sum_{(i, j, W_{ij}) \in S'} (R_i - R_j) W_{ij} < \sum_{(i, j, W_{ij}') \in S'} (R_i - R_j) W_{ij}'.
$$

(2) If $\theta_{k} < 0$, then for each positive sample $(i,j,k,1)$ of annotator $k$, the per-sample gradient satisfies $\frac{\partial\ln l}{\partial R_i} < 0$, and for each negative sample $(i,j,k,0)$ of annotator $k$, $\frac{\partial\ln l}{\partial R_i} > 0$. 
From Theorem 4.2, it is evident that the annotator abilities derived from MLE have practical significance. Specifically, $\sum_{(i,j,W_{ij})\in S'}(R_i - R_j)W_{ij}$ can be interpreted as the correlation between the annotations $W_{ij}$ and the rankings $R_i - R_j$. Theorem 4.2 (1) implies that a higher annotator ability corresponds to a greater value of $\sum_{(i,j,W_{ij})\in S'}(R_i - R_j)W_{ij}$, meaning that a larger $\theta_k$ indicates that the annotations from annotator $k$ are more consistent with the overall rankings. Meanwhile, Theorem 4.2 (2) suggests that an annotator with negative ability might annotate inconsistently or arbitrarily, and am-ELO can identify these anomalous annotators.

**Normalization** Although this method has strong interpretability for modeling annotators, it is not difficult to observe that, for such an optimization problem, if $(R_1^*,\dots ,R_N^*,\theta_1^*,\dots ,\theta_M^*)$ is an extreme point, then $(\alpha R_1^*, \dots, \alpha R_N^*, \frac{1}{\alpha} \theta_1^*, \dots, \frac{1}{\alpha} \theta_M^*)$ is also an extreme point. Thus, when $\alpha < 0$, the model score ranking will be completely reversed, leading to potential instability. To mitigate this issue, we impose a constraint on the annotators' abilities:

$$
\theta_1 + \theta_2 + \dots + \theta_M = 1. \tag{7}
$$

From Theorem 4.2 (2), we know that $\theta_{k} > 0$ corresponds to annotators who annotate normally. The significance of this normalization operation essentially rests on the assumption that the majority of annotators in the group are labeling responsibly (Nowak & Rüger, 2010). Based on this assumption, we determine whether the model rankings should follow the original order or be reversed.

Algorithm 2 The am-ELO Rating System
Input: Dataset $S$, Learning Rate $\alpha$, Number of Epochs $Epoch$
Initialize: Score of Models $R$ and annotators' ability $\Theta$
for $t = 1$ to $Epoch$ do
  Calculate MLE: $\ln L\gets \mathrm{MLE}(R,\Theta ,S)$
  Optimize: $R\gets R + \alpha \frac{\partial\ln L}{\partial R},\ \Theta \gets \Theta +\alpha \frac{\partial\ln L}{\partial\Theta}$
  Normalization: $\Theta \leftarrow \frac{\Theta}{\mathbf{1}^T\cdot\Theta}$
end for
Output: ELO Score and annotators' ability $(R,\Theta)$

Algorithm 3 The Stable Arena Framework
Input: Learning Rate $\alpha$, Number of Epochs $Epoch$, Ability Threshold $\epsilon$
Initialize: Dataset $S \gets \emptyset$, Data Quantity Threshold $\delta$
while True do
  $S \gets S \cup S_{new}$
  for $k = 1$ to $M$ do
    $\delta_k \gets |\{(i,j,x,W_{ij}) \in S \mid x = k\}|$
  end for
  $S' \gets \{(i,j,k,W_{ij}) \in S \mid \delta_k > \delta\}$
  $(R,\Theta) \gets \text{am-ELO}(S',\alpha, Epoch)$, with $R = (R_1, \dots, R_N)$ and $\Theta = (\theta_1, \dots, \theta_M)$
  Output: ELO Score $(R_1, \dots, R_N)$
  Filter annotators: $S \gets \{(i,j,k,W_{ij}) \in S \mid \theta_k > \epsilon\}$
end while

# 4.3. Stable Arena Framework

Algorithm 2 presents the pseudo-code for the am-ELO algorithm. The am-ELO algorithm performs gradient descent (Ruder, 2016) on the negative log-likelihood function over the entire dataset to find the extreme point, ultimately returning both the model scores and annotator abilities. Specifically, when considering only the m-ELO algorithm, the concavity of its log-likelihood function enables the use of Newton's method (Galantai, 2000; Kelley, 2003) during optimization. This allows for dynamic adjustment of the learning rate, thereby improving convergence efficiency. 
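
To make Algorithm 2 concrete, the following is a minimal sketch of the joint MLE of Equations 5-7 using PyTorch autograd; the function name, data layout, and hyperparameter values are illustrative assumptions, not the released implementation.

```python
# Minimal sketch of am-ELO (Algorithm 2): jointly estimate model scores R and
# annotator abilities theta by gradient ascent on the log-likelihood (Eq. 5),
# applying the normalization of Eq. (7) after every step.
import torch

def am_elo(records, num_models, num_annotators, lr=0.1, epochs=2000):
    # records: list of (i, j, k, w_ij) tuples with w_ij in {0, 0.5, 1}
    i = torch.tensor([r[0] for r in records])
    j = torch.tensor([r[1] for r in records])
    k = torch.tensor([r[2] for r in records])
    w = torch.tensor([r[3] for r in records], dtype=torch.float)

    R = torch.zeros(num_models, requires_grad=True)              # model scores
    theta = torch.full((num_annotators,), 1.0 / num_annotators,
                       requires_grad=True)                       # annotator abilities
    opt = torch.optim.SGD([R, theta], lr=lr)

    for _ in range(epochs):
        opt.zero_grad()
        p = torch.sigmoid(theta[k] * (R[i] - R[j]))              # Eq. (4)
        log_lik = (w * torch.log(p + 1e-12)
                   + (1 - w) * torch.log(1 - p + 1e-12)).sum()   # Eq. (5)
        (-log_lik).backward()                                    # ascend ln L
        opt.step()
        with torch.no_grad():
            theta /= theta.sum()                                 # Eq. (7)
    return R.detach(), theta.detach()
```

Holding $\theta$ fixed at a single shared constant $C$ recovers m-ELO, and the final normalization step enforces the constraint of Equation 7, resolving the scale and sign ambiguity discussed above.
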
+ +Building on the improvements to the ELO method discussed earlier, we introduce the Stable Arena Framework, a novel paradigm for arena-based evaluation, as detailed in Algorithm 3. To ensure more robust evaluations, we carefully screen the annotated data both before and after applying the am-ELO method. Specifically, upon incorporating new annotation samples, we first filter out annotators who have fewer than $\delta$ annotation records. This is crucial because annotators with fewer records tend to produce less reliable results. However, this does not imply permanent exclusion; once such annotators accumulate a sufficient number of annotations, their records will be reconsidered. + +After evaluating both models and annotators, we further refine the process by filtering annotators based on their estimated abilities. Annotators with negative ability values, or those with ability values below a threshold $\epsilon$ , are deemed detrimental to the evaluation process. For these annotators, we either issue warnings or exclude them entirely from further evaluations. Moreover, since a higher $\theta$ indicates greater consistency between the annotations and the overall ranking, the LLM evaluation platform can reward annotators proportionally to their demonstrated abilities. + +# 5. Experiments + +In this section, we introduce and compare the performance of our proposed method with the traditional ELO method in predicting annotation results, highlighting the superior modeling capability of am-ELO. Additionally, we demonstrate the limitations of the traditional ELO method through a comparison of model rankings produced by various ELO methods and a case study. Next, we validate the convergence of the ELO rankings generated by our method, further reinforcing the validity of our approach for evaluating LLMs. Finally, to assess the stability of the ELO method, we apply four different strategies to perturb the annotators. Our results show that our method not only maintains stability in the tests but also effectively identifies anomalous annotators, emphasizing the superiority of our approach. + +# 5.1. Dataset + +We conduct experiments on a real annotation dataset, Chatbot (Zheng et al., 2023), which was collected from 13,000 distinct IP addresses in the Chatbot Arena between April and June 2023. The dataset consists of 33,000 curated conversations with pairwise human preferences. Each entry includes a question ID, the names of two models, their full conversation transcripts, the annotator's vote, and its ID. Due to the requirement for MLE in this experiment, individual samples may introduce instability. Consequently, we excluded annotator samples with fewer than 50 annotated records. The statistical information of the filtered dataset is shown in Table 1. + +![](images/0fe525e622853c1af579a3b01315f43f9033f849c76d15564f2a6ddad802554b.jpg) +Figure 3. The result of each LLMs on different evaluation method. Specifically, the line chart represents the normalized ELO scores $\uparrow$ (ranging from 0 to 1) of each LLM under different evaluation methods. The bar chart represents the Loss $\downarrow$ (log-likelihood function) of each LLM's match records under different evaluation methods. + +Table 1. Statistics of the dataset + +

| Dataset | Chatbot |
| --- | --- |
| #Annotators | 42 |
| #Models | 20 |
| #Response logs | 4321 |
| #Response logs per annotator | 102.88 |
| #Response logs per model | 216.05 |
| #Response logs per model pair | 22.74 |

+ +# 5.2. Setting + +In this experiment, we consider a baseline model, the traditional ELO method, alongside two methods we proposed: m-ELO and am-ELO. For the iterative ELO method, we perform repeated experiments by shuffling the dataset 1000 times and averaging the results. The MLE is solved using the gradient descent (GD) approach with a learning rate of 0.1 and a fixed number of 2000 iterations. The code can be found in the github: https://github.com/bigdata-ustc/am-ELO. + +# 5.3. Result and Case Study + +The bar chart in Figure 3 presents the mean log-likelihood loss for each method. As shown, the loss difference between m-ELO and ELO, which share the same probability function, is minimal, while the loss for am-ELO is significantly lower than the other two methods. This indicates that am-ELO demonstrates better fitting ability. Furthermore, as shown in Table 2, am-ELO significantly outperforms the other two baseline models in prediction tasks, suggesting that am-ELO exhibits superior generalization ability. This + +Table 2. The Performance of ELO method for prediction. + +

| Method | MSE ↓ | AUC ↑ |
| --- | --- | --- |
| ELO | 0.1238±0.0031 | 0.7492±0.0068 |
| m-ELO | 0.1234±0.0029 | 0.7503±0.0066 |
| am-ELO | 0.1208±0.0034 | 0.7581±0.0067 |

![](images/ff6c7ce42c0132139aee0383d64c6b5195a88cf206a05c0905c8daaa5717c346.jpg)
Figure 4. The heatmap shows the number of victories in battles between various models (three models with similar abilities, namely koala-13b, vicuna-7b and gpt-13b, together with the models that are better or worse than them). Each number in the figure represents the number of times the row model beat the column model.

also demonstrates that the improved probability function effectively models the annotators.

Meanwhile, the line chart in Figure 3 illustrates the ELO scores obtained from the three ELO methods. It is clear that the ranking trends of our proposed methods align closely with the traditional ELO method. However, there are some differences in the rankings of specific models, such as koala-13b, vicuna-7b, and gpt-13b.

To analyze these models with similar abilities, we categorize the remaining models into two groups based on their ELO scores: "Better" and "Worse", representing models that are better or worse than the aforementioned models. We visualize the number of matches between these models. As shown in Figure 4, each number represents the number of times the model in the row defeated the model in the column. For example, the first row and third column indicate that vicuna-7b lost to "Better" models 148 times. From this, we observe that although the head-to-head records between koala-13b and vicuna-7b do not differentiate their abilities, both models defeated the same number of "Better" models. Meanwhile, vicuna-7b lost to fewer "Better" and "Worse" models. Based on this result, we conclude that vicuna-7b is stronger than koala-13b, which aligns with the rankings provided by both am-ELO and m-ELO.

However, due to koala-13b's large number of victories over "Worse" models, the traditional ELO method disproportionately weighs these victories during the scoring process, ultimately ranking koala-13b higher than vicuna-7b. This issue suggests that avoiding strong opponents and repeatedly defeating weaker ones could artificially inflate a model's ranking, which is an undesirable outcome.

![](images/7d3fcf7d3cf318329f3af91bcd42de4cfa41aee7b7ae934f171954272cf86bf.jpg)
Figure 5. The Loss and Consistency of the evaluation methods at each epoch on the Chatbot dataset.

# 5.4. The Convergence and Efficiency of ELO Methods

In this subsection, we discuss the convergence and efficiency of the proposed am-ELO. In addition to the three methods mentioned above, we also compare against am-ELO (w/o Norm), a variant in which normalization is not performed during training. To analyze the convergence and efficiency of the results obtained by each evaluation method, we record the loss (Loss) during the gradient descent process. Additionally, we perform five random initializations of the model parameters and calculate the consistency of the rankings (Consistency) (Hastie & Tibshirani, 1997) of the ELO scores output by these five runs at each epoch. It should be noted that the iterative process of the traditional ELO method differs from the gradient descent approach of MLE. Therefore, we directly record the final output loss and consistency for the traditional ELO method. The results are shown in Figure 5.

As observed from the loss, the three MLE-based methods all converge to a local minimum within a limited number of iterations. The loss at convergence for m-ELO is nearly identical to that of the traditional ELO, which is expected since both methods share the same probability estimation function. 
This once again demonstrates that m-ELO and the traditional ELO are essentially equivalent. Moreover, am-ELO (w/o Norm) converges the fastest, followed by am-ELO, with m-ELO being the slowest. This is because am-ELO has more adjustable parameters compared to m-ELO, and am-ELO (w/o Norm) benefits from fewer constraints during the gradient descent process. However, as seen from the consistency, am-ELO (w/o Norm) quickly converges to different local minima, and its consistency stabilizes at 0.4. This suggests that the five outputs of this method exhibit two ordered sequences and three reversed sequences $\left(\frac{C_2^2 + C_3^2}{C_5^2} = 0.4\right)$. On the other hand, am-ELO not only achieves stable rankings after sufficient gradient descent iterations but does so more efficiently than m-ELO. This demonstrates that the proposed am-ELO method strikes a balance between convergence and efficiency.

# 5.5. The Stability of Various ELO Methods

Since directly verifying the stability of the am-ELO method during the evaluation process is challenging, we use simulation experiments to introduce perturbations to the annotators. Specifically, we perturb the annotators' results using four strategies to simulate the presence of anomalous annotators that may occur during testing:

- Random: If model A wins, the result will have a $50\%$ chance of being changed to "Tie" and a $50\%$ chance of being changed to "model B wins", and vice versa.
- Equal: All results are changed to "Tie".
- Flip: If model A wins, the result will be flipped to "model B wins", and vice versa. The outcome "Tie" remains unchanged.
- Mixed: A random selection is made from the first three perturbation strategies for each instance.

These perturbations mimic scenarios where intentional mislabeling occurs in annotations. Considering that the majority of annotators in the arena will annotate normally, the number of perturbed annotators in our simulation experiment does not exceed half of the total number of annotators. We expect a stable scoring method to have two key properties: (1) it should produce ELO rankings consistent with those without perturbations, and (2) it should identify the anomalous annotators. The ground truth for the consistency of the ELO score is the pairwise comparison between the ELO rankings with and without perturbations, and the ground truth for identifying anomalous annotators is the F1-score (Chen & Lin, 2006) of the annotators' abilities obtained from am-ELO. A higher F1-score indicates better accuracy in detecting the perturbations.

![](images/bc6b542566994e5af2185ab974b5ee8cb370b5eeb974970901d4e40b206c2d09.jpg)
Figure 6. This figure contains four line charts and four scatter plots, corresponding to the ELO score consistency under the four types of perturbation, as well as the changes in annotator abilities obtained from am-ELO as the level of perturbation increases.

![](images/3546ccdc4e6a4a7c61681b7ae84d785e09cc9918681cb7d2ead93cf70d0ccc3f.jpg)
Figure 7. Line charts of the F1-score for outlier detection at different thresholds under the four types of perturbation.

The line charts in Figure 6 show the relationship between the ratio of perturbations and the consistency of ELO scores. We observe that am-ELO maintains higher consistency across various types of perturbations. 
Specifically, aside from the fact that the "Equal" perturbation itself is unlikely to affect rankings, leading to high consistency across all ELO methods, in the other three perturbation scenarios am-ELO reduces the inconsistency rate to roughly $30\%$ of that of m-ELO or the traditional ELO. Meanwhile, the scatter plots at the bottom of Figure 6 show the changes in annotator abilities under each perturbation. Red dots represent annotators who were normal, while green dots represent those who were anomalous. It is clear that nearly all anomalous annotators have ability scores below 0, indicating that they are identified as noise points. Additionally, Figure 7 presents the F1-scores for detecting perturbations under thresholds of 0 and 0.005. Under different perturbations, the recognition accuracy reached $90\%$ when $\epsilon = 0$, and even up to $95\%$ when $\epsilon = 0.005$. These results demonstrate that our method effectively detects perturbations, models the annotators, and maintains the consistency of results, thereby alleviating the problem of ELO inconsistency.

# 6. Conclusion

In this study, we explored the instability of the ELO method in the context of LLM evaluation, emphasizing its impact on the reliability of evaluation outcomes. To address this issue, we introduced the Stable Arena Framework, which utilizes the MLE approach for ELO rating estimation and incorporates annotator ability parameters into the probability function. Our experiments demonstrated that am-ELO not only achieves more stable convergence but also effectively identifies anomalous annotators, resulting in rankings that are more aligned with human intuition. These findings suggest that our approach can significantly reduce the instability of ELO, enhancing the credibility and robustness of LLM evaluation, while providing a more stable and easily implementable framework for arena-based evaluation.

However, our method has certain limitations. Specifically, the dimensions of annotator modeling are somewhat simplistic, as our approach primarily captures the annotator's discriminative ability and consistency with other annotators. This makes it challenging to fully capture the annotator's broader capabilities. In future work, we aim to refine the design of annotator ability dimensions to better leverage crowdsourcing for arena-based evaluation.

# Acknowledgements

This research was supported by grants from the National Key Research and Development Program of China (Grant No. 2024YFC3308200), the National Natural Science Foundation of China (62337001), the Key Technologies R & D Program of Anhui Province (No. 202423k09020039), China Postdoctoral Science Foundation (Grant No. 2024M760725) and the Fundamental Research Funds for the Central Universities.

# Impact Statement

This paper presents work whose goal is to advance the field of Machine Learning. There are many potential societal consequences of our work, none of which we feel must be specifically highlighted here.

# References

Achiam, J., Adler, S., Agarwal, S., Ahmad, L., Akkaya, I., Aleman, F. L., Almeida, D., Altenschmidt, J., Altman, S., Anadkat, S., et al. Gpt-4 technical report. arXiv preprint arXiv:2303.08774, 2023.
Aldous, D. Elo ratings and the sports model: A neglected topic in applied probability? 2017.
Anthropic, S. Model card addendum: Claude 3.5 haiku and upgraded claude 3.5 sonnet. URL https://api.semanticscholar.org/CorpusID:273639283. 
+Bai, Y., Jones, A., Ndousse, K., Askell, A., Chen, A., Das-Sarma, N., Drain, D., Fort, S., Ganguli, D., Henighan, T., et al. Training a helpful and harmless assistant with reinforcement learning from human feedback. arXiv preprint arXiv:2204.05862, 2022. +Boubdir, M., Kim, E., Ermis, B., Hooker, S., and Fadaee, M. Elo uncovered: Robustness and best practices in language model evaluation. arXiv preprint arXiv:2311.17295, 2023. +Bouchiha, M. A., Telnoff, Q., Bakkali, S., Champagnat, R., Rabah, M., Coustaty, M., and Ghamri-Doudane, Y. Llmchain: Blockchain-based reputation system for sharing and evaluating large language models. In 2024 IEEE 48th Annual Computers, Software, and Applications Conference (COMPSAC), pp. 439-448, 2024. doi: 10.1109/COMPSAC61105.2024.00067. +Boyd, S. and Vandenberghe, L. Convex optimization. Cambridge university press, 2004. +Busa-Fekete, R., Hüllermeier, E., and Szörenyi, B. Preference-based rank elicitation using statistical mod + +els: The case of mallows. In International conference on machine learning, pp. 1071-1079. PMLR, 2014. +Chan, C., Chen, W., Su, Y., Yu, J., Xue, W., Zhang, S., Fu, J., and Liu, Z. Chateval: Towards better llm-based evaluators through multi-agent debate. In The Twelfth International Conference on Learning Representations, ICLR 2024, Vienna, Austria, May 7-11, 2024. OpenReview.net, 2024. URL https://openreview.net/forum?id=FQepisCUWu. +Chen, Y.-W. and Lin, C.-J. Combining svms with various feature selection strategies. Feature extraction: foundations and applications, pp. 315-324, 2006. +Cheng, M., Zhang, H., Yang, J., Liu, Q., Li, L., Huang, X., Song, L., Li, Z., Huang, Z., and Chen, E. Towards personalized evaluation of large language models with an anonymous crowd-sourcing platform. In Companion Proceedings of the ACM Web Conference 2024, pp. 1035-1038, 2024. +Cheng, M., Luo, Y., Ouyang, J., Liu, Q., Liu, H., Li, L., Yu, S., Zhang, B., Cao, J., Ma, J., et al. A survey on knowledge-oriented retrieval-augmented generation. arXiv preprint arXiv:2503.10677, 2025. +Chernoff, H. Sequential design of experiments. Springer, 1992. +Chiang, W., Zheng, L., Sheng, Y., Angelopoulos, A. N., Li, T., Li, D., Zhu, B., Zhang, H., Jordan, M. I., Gonzalez, J. E., and Stoica, I. Chatbot arena: An open platform for evaluating llms by human preference. In Forty-first International Conference on Machine Learning, ICML 2024, Vienna, Austria, July 21-27, 2024. OpenReview.net, 2024. URL https://openreview.net/forum?id=3MW8GKNyzI. +Cobbe, K., Kosaraju, V., Bavarian, M., Chen, M., Jun, H., Kaiser, L., Plappert, M., Tworek, J., Hilton, J., Nakano, R., et al. Training verifiers to solve math word problems. arXiv preprint arXiv:2110.14168, 2021. +Coulom, R. Computing "elo ratings" of move patterns in the game of go. ICGA journal, 30(4):198-208, 2007. +Ebtekar, A. and Liu, P. An elo-like system for massive multiplayer competitions. arXiv preprint arXiv:2101.00400, 2021. +Eickhoff, C. Cognitive biases in crowdsourcing. In Proceedings of the eleventh ACM international conference on web search and data mining, pp. 162-170, 2018. +Elo, A. E. The proposed uscf rating system, its development, theory, and applications. *Chess life*, 22(8):242-247, 1967. + +Embretson, S. E. and Reise, S. P. Item response theory. Psychology Press, 2013. +Furr, R. M. Psychometrics: an introduction. SAGE publications, 2021. +Galantai, A. The theory of newton's method. Journal of Computational and Applied Mathematics, 124(1-2):25-44, 2000. +Hastie, T. and Tibshirani, R. 
Classification by pairwise coupling. Advances in neural information processing systems, 10, 1997. +Hendrycks, D., Burns, C., Basart, S., Zou, A., Mazeika, M., Song, D., and Steinhardt, J. Measuring massive multitask language understanding. arXiv preprint arXiv:2009.03300, 2020. +Hunter, D. R. Mm algorithms for generalized bradley-terry models. The annals of statistics, 32(1):384-406, 2004. +Jin, Y., Choi, M., Verma, G., Wang, J., and Kumar, S. Mmsoc: Benchmarking multimodal large language models in social media platforms. In ACL, 2024a. +Jin, Y., Zhao, Q., Wang, Y., Chen, H., Zhu, K., Xiao, Y., and Wang, J. Agentreview: Exploring peer review dynamics with llm agents. In EMNLP, 2024b. +Johnson, C. R. Positive definite matrices. The American Mathematical Monthly, 77(3):259-264, 1970. +Kelley, C. T. Solving nonlinear equations with Newton's method. SIAM, 2003. +Li, C., Shi, L., Zhou, C., Huan, Z., Tang, C., Zhang, X., Wang, X., Zhou, J., and Liu, S. A merge sort based ranking system for the evaluation of large language models. In Bifet, A., Krilavicius, T., Miliou, I., and Nowaczyk, S. (eds.), Machine Learning and Knowledge Discovery in Databases. Applied Data Science Track, pp. 240-255, Cham, 2024. Springer Nature Switzerland. ISBN 978-3-031-70378-2. +Li, J., Sun, S., Yuan, W., Fan, R.-Z., Zhao, H., and Liu, P. Generative judge for evaluating alignment. arXiv preprint arXiv:2310.05470, 2023. +Liang, P., Bommasani, R., Lee, T., Tsipras, D., Soylu, D., Yasunaga, M., Zhang, Y., Narayanan, D., Wu, Y., Kumar, A., et al. Holistic evaluation of language models. arXiv preprint arXiv:2211.09110, 2022. +Liu, Q., Huang, Z., Yin, Y., Chen, E., Xiong, H., Su, Y., and Hu, G. Ekt: Exercise-aware knowledge tracing for student performance prediction. IEEE Transactions on Knowledge and Data Engineering, 33(1):100-115, 2021. + +Liu, Z., Yan, Z., Liu, Q., Li, J., Zhang, Y., Huang, Z., Wu, J., and Wang, S. Computerized adaptive testing via collaborative ranking. In Neural Information Processing Systems, 2024. URL https://api_semanticscholar.org/CorpusID:276259892. +Morizot, J., Ainsworth, A. T., and Reise, S. P. Toward modern psychometrics. Handbook of research methods in personality psychology, 407, 2009. +Nguyen, D. and Zhang, A. Y. A spectral approach to item response theory. In Koyejo, S., Mohamed, S., Agarwal, A., Belgrave, D., Cho, K., and Oh, A. (eds.), Advances in Neural Information Processing Systems, volume 35, pp. 38818-38830. Curran Associates, Inc., 2022. +Nowak, S. and Rüger, S. How reliable are annotations via crowdsourcing: a study about inter-annotator agreement for multi-label image annotation. In Proceedings of the international conference on Multimedia information retrieval, pp. 557-566, 2010. +Ouyang, J., Pan, T., Cheng, M., Yan, R., Luo, Y., Lin, J., and Liu, Q. Hoh: A dynamic benchmark for evaluating the impact of outdated information on retrieval-augmented generation, 2025. URL https://arxiv.org/abs/2503.04800. +O'Meara, O. T. Introduction to quadratic forms, volume 117. Springer, 2013. +Pelánek, R. Applications of the elo rating system in adaptive educational systems. Computers & Education, 98:169-179, 2016. +Polo, F. M., Weber, L., Choshen, L., Sun, Y., Xu, G., and Yurochkin, M. tinybenchmarks: evaluating llms with fewer examples. In *Forty-first International Conference on Machine Learning*, ICML 2024, Vienna, Austria, July 21-27, 2024. OpenReview.net, 2024. URL https://openreview.net/forum?id=qAml3FpfhG. +Rao, P. and Kupper, L. L. 
Ties in paired-comparison experiments: A generalization of the bradley-terry model. Journal of the American Statistical Association, 62(317): 194-204, 1967. +Raykar, V. C. and Yu, S. Ranking annotators for crowdsourced labeling tasks. Advances in neural information processing systems, 24, 2011. +Ruder, S. An overview of gradient descent optimization algorithms. ArXiv, abs/1609.04747, 2016. URL https://api(semanticscholar.org/CorpusID:17485266. +Shi-gu, J. Application of lagrange mean value theorem. 2014. URL https://api-semanticscholar.org/CorpusID:124971556. + +Sismanis, Y. How i won the" chess ratings-elo vs the rest of the world" competition. arXiv preprint arXiv:1012.4571, 2010. +Szörenyi, B., Busa-Fekete, R., Paul, A., and Hüllermeier, E. Online rank elicitation for plackett-luce: A dueling bandits approach. Advances in neural information processing systems, 28, 2015. +Thacker, W. C. The role of the hessian matrix in fitting models to measurements. Journal of Geophysical Research: Oceans, 94(C5):6177-6196, 1989. +Wang, F., Liu, Q., Chen, E., Huang, Z., Yin, Y., Wang, S., and Su, Y. Neuralcd: a general framework for cognitive diagnosis. IEEE Transactions on Knowledge and Data Engineering, 2022. +Wang, Y., Yu, Z., Zeng, Z., Yang, L., Wang, C., Chen, H., Jiang, C., Xie, R., Wang, J., Xie, X., et al. Pandalm: An automatic evaluation benchmark for llm instruction tuning optimization. arXiv preprint arXiv:2306.05087, 2023. +Welinder, P. and Perona, P. Online crowdsourcing: rating annotators and obtaining cost-effective labels. In 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition-Workshops, pp. 25-32. IEEE, 2010. +Zellers, R., Holtzman, A., Bisk, Y., Farhadi, A., and Choi, Y. Hellaswag: Can a machine really finish your sentence? arXiv preprint arXiv:1905.07830, 2019. +Zermelo, E. Die berechnung der turnier-ergebnisse als ein maximumproblem der wahrscheinlichkeitsrechnung. Mathematische Zeitschrift, 29(1):436-460, 1929. +Zhan, J., Wang, L., Gao, W., Li, H., Wang, C., Huang, Y., Li, Y., Yang, Z., Kang, G., Luo, C., et al. Evaluatology: The science and engineering of evaluation. BenchCouncil Transactions on Benchmarks, Standards and Evaluations, 4(1):100162, 2024. +Zhang, Y., Zhang, M., Yuan, H., Liu, S., Shi, Y., Gui, T., Zhang, Q., and Huang, X. Llmeval: A preliminary study on how to evaluate large language models. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 38, pp. 19615-19622, 2024a. +Zhang, Z., Wu, L., Liu, Q., Liu, J.-Y., Huang, Z., Yin, Y., Yan, Z., Gao, W., and Chen, E. Understanding and improving fairness in cognitive diagnosis. Sci. China Inf. Sci., 67, 2024b. URL https://api-semanticscholar.org/CorpusID:269473652. + +Zhao, Q., Wang, J., Zhang, Y., Jin, Y., Zhu, K., Chen, H., and Xie, X. Competeai: Understanding the competition behaviors in large language model-based agents. In ICML, 2024. +Zheng, L., Chiang, W.-L., Sheng, Y., Zhuang, S., Wu, Z., Zhuang, Y., Lin, Z., Li, Z., Li, D., Xing, E. P., Zhang, H., Gonzalez, J. E., and Stoica, I. Judging llm-as-a-judge with mt-bench and chatbot arena, 2023. +Zhu, L., Wang, X., and Wang, X. Judgelm: Fine-tuned large language models are scalable judges. arXiv preprint arXiv:2310.17631, 2023. +Zhu, Z., Arthur, D., and Chang, H.-H. A new person-fit method based on machine learning in cdm in education. British Journal of Mathematical and Statistical Psychology, 75(3):616-637, 2022. +Zhuang, Y., Liu, Q., Huang, Z., Li, Z., Shen, S., and Ma, H. 
Fully adaptive framework: Neural computerized adaptive testing for online education. In Proceedings of the AAAI conference on artificial intelligence, volume 36, pp. 4734-4742, 2022. + +# A. Proofs of Theorem 4.1 + +Proof. Assume $R_{1} = 0$ and consider the remaining variables $(R_{2},\dots ,R_{N})$ . For each sample $(i,j,W_{ij})$ , consider the log-likelihood function $\ln l$ for this sample is given by: + +$$ +\ln l = W _ {i j} \ln P \left(R _ {i}, R _ {j}\right) + W _ {j i} \ln P \left(R _ {j}, R _ {i}\right). +$$ + +The second-order partial derivatives of $\ln l$ are: + +$$ +\frac {\partial^ {2} \ln l}{\partial R _ {i} ^ {2}} = - C ^ {2} P (R _ {i}, R _ {j}) (1 - P (R _ {i}, R _ {j})), i \neq 1, +$$ + +$$ +\frac {\partial^ {2} \ln l}{\partial R _ {i} \partial R _ {j}} = C ^ {2} P (R _ {i}, R _ {j}) (1 - P (R _ {i}, R _ {j})), i, j \neq 1, +$$ + +Now, let the number of matches between model $i$ and model $j$ be $\delta_{ij}$ and define $a_{ij} = \delta_{ij}C^2 P(R_i,R_j)(1 - P(R_i,R_j))$ . For the Hessian matrix (Thacker, 1989) of the log-likelihood function $\frac{\partial^2\ln L}{\partial\mathbf{R}\partial\mathbf{R}^T}$ , its quadratic form (O'Meara, 2013) can be expressed as: + +$$ +\mathbf {x} \frac {\partial^ {2} \ln L}{\partial \mathbf {R} \partial \mathbf {R} ^ {T}} \mathbf {x} ^ {T} = - \sum_ {i = 2} ^ {N} \sum_ {j = 2} ^ {N} a _ {i j} (x _ {i} - x _ {j}) ^ {2} - \sum_ {i = 2} ^ {N} a _ {i 1} x _ {i} ^ {2} - \sum_ {j = 2} ^ {N} a _ {1 j} x _ {j} ^ {2}. +$$ + +Note that $a_{ij} \geq 0$ , therefore: + +$$ +\mathbf {x} \frac {\partial^ {2} \ln L}{\partial \mathbf {R} \partial \mathbf {R} ^ {T}} \mathbf {x} ^ {T} \leq 0. +$$ + +The equality holds if and only if $x_{i} = x_{j} = 0$ , i.e. $\mathbf{x} = \mathbf{0}$ . Since the quadratic form is strictly negative for all non-zero vectors $\mathbf{x}$ , the Hessian matrix $\frac{\partial^2\ln L}{\partial\mathbf{R}\partial\mathbf{R}^T}$ is negative definite (Johnson, 1970). This implies that the log-likelihood function $\ln L$ is concave. Therefore, $\ln L$ can have at most one extreme point (Boyd & Vandenberghe, 2004), ensuring the uniqueness of the maximum likelihood solution. + +# B. Proofs of Theorem 4.2 + +Proof. 
(1) For annotators 1 and 2, the following formula can be obtained from Equation 6: + +$$ +\frac {\partial \ln L}{\partial \theta_ {1}} = \sum_ {(i, j, W _ {i j}) \in S ^ {\prime}} (R _ {i} - R _ {j}) \left(W _ {i j} - P \left(R _ {i}, R _ {j} \mid \theta_ {1}\right)\right) +$$ + +$$ +\frac{\partial\ln L}{\partial\theta_{2}} = \sum_{(i,j,W^{\prime}_{ij})\in S^{\prime}}(R_{i} - R_{j})(W^{\prime}_{ij} - P(R_{i},R_{j}|\theta_{2})) +$$ + +Since $\frac{\partial\ln L}{\partial\theta_1} = \frac{\partial\ln L}{\partial\theta_2} = 0$ , the difference between the two equations can be obtained: + +$$ +\sum_ {(i, j, W _ {i j}) \in S ^ {\prime}} (R _ {i} - R _ {j}) \left(W _ {i j} - W _ {i j} ^ {\prime}\right) = \sum_ {(i, j, W _ {i j} ^ {\prime}) \in S ^ {\prime}} (R _ {i} - R _ {j}) \left(P \left(R _ {i}, R _ {j} \mid \theta_ {1}\right) - P \left(R _ {i}, R _ {j} \mid \theta_ {2}\right)\right) +$$ + +According to the Lagrange mean value theorem (Shi-gu, 2014), the following derivation can be derived: + +$$ += \sum_ {(i, j, W _ {i j} ^ {\prime}) \in S ^ {\prime}} (R _ {i} - R _ {j}) ^ {2} P _ {i j} (\xi_ {i j}) (1 - P _ {i j} (\xi_ {i j})) (\theta_ {1} - \theta_ {2}) +$$ + +Due to $P_{ij}(\xi_{ij})(1 - P_{ij}(\xi_{ij})) > 0$ and $\theta_1 < \theta_2$ : + +$$ +\sum_ {(i, j, W _ {i j}) \in S ^ {\prime}} (R _ {i} - R _ {j}) \left(W _ {i j} - W _ {i j} ^ {\prime}\right) < 0 +$$ + +(2) Because of $0 < P(R_{i}, R_{j} | \theta_{k}) < 1$ and $\theta_{k} < 0$ , for each positive sample $(i, j, k, 1)$ of annotator $k$ , we have $\frac{\partial \ln l}{\partial R_{i}} = \theta_{k} (1 - P(R_{i}, R_{j} | \theta_{k})) < 0$ . Similarly, for each negative sample $(i, j, k, 0)$ of annotator $k$ , we have $\frac{\partial \ln l}{\partial R_{i}} = \theta_{k} (0 - P(R_{i}, R_{j} | \theta_{k})) > 0$ . 
\ No newline at end of file diff --git a/ICML/2025/am-ELO_ A Stable Framework for Arena-based LLM Evaluation/images.zip b/ICML/2025/am-ELO_ A Stable Framework for Arena-based LLM Evaluation/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..8a42710fa248a67b2926975ffa4f50d079e54c06 --- /dev/null +++ b/ICML/2025/am-ELO_ A Stable Framework for Arena-based LLM Evaluation/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:59e2da615701e0132fbcf42af023db238f34d073537baacb219bcf49024aeb10 +size 498703 diff --git a/ICML/2025/am-ELO_ A Stable Framework for Arena-based LLM Evaluation/layout.json b/ICML/2025/am-ELO_ A Stable Framework for Arena-based LLM Evaluation/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..6e4f5eccbfaf1f40a8d027f10eecddfdfdb09073 --- /dev/null +++ b/ICML/2025/am-ELO_ A Stable Framework for Arena-based LLM Evaluation/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:046dfb8f81246f541e32c76cf794d0a0e038bb216f859f0af6f1e92be16e2173 +size 504027 diff --git a/ICML/2025/any4_ Learned 4-bit Numeric Representation for LLMs/aed8248d-b454-44ef-a338-4bc6daba1424_content_list.json b/ICML/2025/any4_ Learned 4-bit Numeric Representation for LLMs/aed8248d-b454-44ef-a338-4bc6daba1424_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..85ae06ebe4331b10df6aee53251c3b50490746c1 --- /dev/null +++ b/ICML/2025/any4_ Learned 4-bit Numeric Representation for LLMs/aed8248d-b454-44ef-a338-4bc6daba1424_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:56b75a134546068a62ce0e36969b548874ff39cfd8237a4c639431b820a01e80 +size 117069 diff --git a/ICML/2025/any4_ Learned 4-bit Numeric Representation for LLMs/aed8248d-b454-44ef-a338-4bc6daba1424_model.json b/ICML/2025/any4_ Learned 4-bit Numeric Representation for LLMs/aed8248d-b454-44ef-a338-4bc6daba1424_model.json new file mode 100644 index 0000000000000000000000000000000000000000..4bede88b6e59abfde431bec7008ebc27b808a7ed --- /dev/null +++ b/ICML/2025/any4_ Learned 4-bit Numeric Representation for LLMs/aed8248d-b454-44ef-a338-4bc6daba1424_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f387f235d5ff34fb2718fc92b8f6db0762091071eba735b2ed9d00e962b64ffa +size 140190 diff --git a/ICML/2025/any4_ Learned 4-bit Numeric Representation for LLMs/aed8248d-b454-44ef-a338-4bc6daba1424_origin.pdf b/ICML/2025/any4_ Learned 4-bit Numeric Representation for LLMs/aed8248d-b454-44ef-a338-4bc6daba1424_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..4071de056a67f38158b15a747ff0ba8f497ca31f --- /dev/null +++ b/ICML/2025/any4_ Learned 4-bit Numeric Representation for LLMs/aed8248d-b454-44ef-a338-4bc6daba1424_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f804dd9c79b0a743ba4407f49fad5d221171a56f98a61ab9b587b103a674060c +size 521408 diff --git a/ICML/2025/any4_ Learned 4-bit Numeric Representation for LLMs/full.md b/ICML/2025/any4_ Learned 4-bit Numeric Representation for LLMs/full.md new file mode 100644 index 0000000000000000000000000000000000000000..c578f9c424aa70a50844dd06880c9adef19b7a69 --- /dev/null +++ b/ICML/2025/any4_ Learned 4-bit Numeric Representation for LLMs/full.md @@ -0,0 +1,483 @@ +Mostafa Elhoushi $^{*1}$ Jeff Johnson $^{*1}$ + +# Abstract + +We present any4, a learned 4-bit weight quantization solution for large language models (LLMs) providing arbitrary numeric 
representations without requiring pre-processing of weights or activations. any4 yields higher accuracy compared to other related 4-bit numeric representation types: int4, fp4 and nf4, as evaluated on a range of model sizes, generations and families (Llama 2, Llama 3, Mistral and Mixtral). While any4 does not require preprocessing of weights or activations, it is also competitive with orthogonal techniques that require such preprocessing (e.g., AWQ and GPTQ). We also experiment with any3 and any2 and show competitiveness at lower bits. Additionally, we show that we can calibrate using a single curated diverse sample rather than hundreds of samples from a dataset as done in most quantization approaches. We also open source tinygemm, a latency optimized GPU matrix multiplication library for LLMs, which implements any4 using a GPU-efficient lookup table strategy along with other common quantization methods. We open source our code at https://github.com/facebookresearch/any4.

# 1. Introduction

![](images/17fefaa940dbe7dd7e03f45a47a6ebffe86ea434f55c5abffcc67f9e6a1b73db.jpg)
Figure 1: Perplexity by quantizing various Llama3 model sizes. Our proposed any4 is the most accurate across numeric formats.

Reduced neural network parameter sizes are important for efficient inference, whether at datacenter scale, where accelerators can be provisioned based more on arithmetic throughput than on memory requirements, or on edge devices, where smaller, slower memories could be used, improving battery lifetime while meeting performance constraints. Given that training is typically done in high dynamic range floating point arithmetic, techniques to lossily compress weights must deal with the possibility of varying scale factors and outliers. Various weight numeric formats, such as 4-bit integer (int4), floating point (fp4), or custom distributions such as NormalFloat4 (nf4) (Dettmers et al., 2023), along with quantization grouping (Dai et al., 2021), are used to increase accuracy. Pre-processing weights and/or activations (e.g., AWQ (Lin et al., 2024), GPTQ (Frantar et al., 2023), or weight Hadamard transforms (Ashkboos et al., 2024b; Liu et al., 2024)) can aid with accuracy as well. In this paper, we present a new learned numeric representation, any4, that does not require online or offline modification of weights or activations. any4 quantization accuracy outperforms other numeric representation types, and is competitive with orthogonal quantization algorithms that preprocess weights and/or activations (orthogonality implying that some of these techniques can be applied together with the any4 representation). Accuracy was evaluated on a wide range of model sizes, generations and families.

# 2. Background

Trained neural network weights tend to be roughly Gaussian in nature but with heavier tails (Goodfellow et al., 2016). In attempting to lossily compress weights via quantization (yielding fewer reproduction values than the original domain), being able to closely match the weight distribution with the possible post-quantization reproduction values is important for accuracy.

# 2.1. Uniform Integer Quantization

Some of the first neural network quantization works concerned uniform integer quantization (Jacob et al., 2018). Given a set of values to quantize, we obtain the maximum absolute value, and set that to the extreme value (e.g., -128 / +127 for int8 and -8 / +7 for int4 quantization), with zero being preserved (int8/int4 zero dequantizes to original domain zero). 
Each increment between int8/int4 values corresponds to a fixed increment (scale) in the original floating point domain.

This allows for more efficient (chip area and power) hardware circuits, as integer multiply-add is much simpler than floating point multiply-add. However, uniform integer quantization is best suited to representing samples from a uniform distribution, which is a mismatch with neural network properties. Increased bitwidth (denser uniform samples) is needed for accuracy due to the expected distribution mismatch, indicating that there is waste in memory storage.

# 2.2. Floating Point Quantization

Floating point quantization (reducing fractional precision and dynamic range via rounding) is another mechanism. Unlike integer quantization, reproduction values are now non-uniformly spaced. Floating point arithmetic yields a piecewise linear distribution of values: the steps between floating point exponents are geometric in nature (multiply or divide by 2 each increment), but within a given exponent value, the spacing of reproduction values is linear (as given by the significand bits). This is a slightly closer match to a Gaussian distribution: a zero-mean Gaussian has most of its mass at small magnitudes, which floating point samples more densely than a linear spacing of the number line, while within an exponent the spacing of values is still linear.

Such quantization makes sense with hardware support for reduced bit width floating point types (e.g., fp8 formats with Nvidia's H100 GPU and fp4 with Nvidia's B100 GPU). In lieu of native conversion instructions, bit manipulation can usually convert or round an $n$-bit fp$n$ value to the nearest standard fp16/bf16 value (thus, fp4 can be emulated on devices with higher bit width floating point support).

# 2.3. Grouped Quantization

As the bitwidth (and thus the number of possible quantization reproduction values) decreases, it can be useful to introduce metadata pertaining to groups of values to the quantization to improve accuracy, with metadata storage cost amortized across many values (Darvish Rouhani et al., 2020). Grouped quantization is an attempt at this. Instead of forcing a single scalar value itself to be the entire representation, we can define groups of contiguous values along a row or column of the matrix. A common offset and scale factor are defined for a group of values such that the reconstruction error is improved, with typical group sizes in practice being 32 to 256. Other variants include Shared Microexponents (Rouhani et al., 2023), providing a group-wise shared exponent value (multiplicative scale) to adjust per-scalar 4-bit floating point values (MX4) in lieu of a scale and offset.

# 2.4. Non-Uniform Quantization

Thus far we have discussed uniform (linear) and floating-point (log/linear) distributions. But we can go further and have quantization reproduction values match the seen distributions more closely.

NormalFloat4 (nf4) (Dettmers et al., 2023) attempts to do exactly this by having the reproduction values (fixed ahead of time) match a Gaussian distribution exactly. However, with an even number of reproduction values (e.g., $2^{n}$ for $n$ bits), we cannot represent a Gaussian symmetrically if we wish to preserve zero. So nf4 is asymmetric, using one of the 16 values to represent zero. This results in higher accuracy, especially for partially sparse matrices. 
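
In software, nf4 (and the learned any4 representation discussed later in this section) is most naturally realized as a small lookup table. Below is a minimal sketch of table-based 4-bit quantization and dequantization; the codebook values are placeholders, not the exact nf4 quantiles, and the helper names are illustrative.

```python
# Minimal sketch of non-uniform 4-bit (de)quantization via a lookup table.
# `codebook` holds 16 reproduction values (placeholders here; nf4 would use
# fixed Gaussian quantiles, any4 a learned per-row table).
import torch

def quantize_lut(w_scaled, codebook):
    # w_scaled: values already scaled into the codebook's range;
    # return the index of the nearest codebook entry for each element.
    dists = (w_scaled.unsqueeze(-1) - codebook).abs()
    return dists.argmin(dim=-1).to(torch.uint8)   # 4-bit codes (stored as uint8 here)

def dequantize_lut(codes, codebook):
    return codebook[codes.long()]                 # table lookup

codebook = torch.linspace(-1.0, 1.0, 16)          # stand-in for nf4/any4 values
w = torch.randn(4, 8) * 0.05
scale = w.abs().amax(dim=1, keepdim=True)         # per-row absmax scaling
codes = quantize_lut(w / scale, codebook)
w_hat = dequantize_lut(codes, codebook) * scale   # reconstruction
```
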
+ +AbnormalFloat4 (af4) (Yoshida, 2023) is a variant of nf4 which adjusts the distribution based on quantization group size. The larger the quantization group, the larger the expected maximum absolute value of Gaussian distribution samples, but the mass of the distribution would still be close to 0. Mapping the nf4 distribution based on the seen absolute maximum value would result in much of the mass of the distribution (values closer to the mean) not being as accurately represented. af4 adjusts the distribution based on group size to take this into account. + +# 2.4.1. ARBITRARY NON-UNIFORM QUANTIZATION: ANY4 + +Instead of trying to match an a priori data distribution as nf4/af4 do, we can instead learn the distribution from the seen data itself. This was explored in signal processing (Lloyd, 1982a; Max, 1960) and any4 explores this for LLMs. For each set of values along each row of a matrix, we can perform k-means (Lloyd, 1982b; MacQueen et al., 1967) or neural network-based clustering, so each row of the matrix has its own 4-bit quantization code, providing indices into a per-row codebook or lookup table (LUT) containing arbitrary floating point dequantization values. This adds little overhead to quantization: for each row of a $\mathbf{M} \times 4096$ matrix, any4 will add 16 bfloat16/float16 values, for an overhead of $(16 \times \text{sizeof}([b]\text{float}16) \times 8\text{bits/byte}) / 4096$ columns $= 0.0625$ bits for each matrix entry. Like existing + +4-bit techniques, for higher accuracy we add quantization groups (e.g., each set of $g$ contiguous row values has a shared 16-bit scale and zero point). Thus, per-scalar quantization group overhead for $g = 128$ in our example would be $((4096 / 128) \times (2 \times 16)) / 4096 = 0.25$ bits, yielding $0.0625 + 0.25 + 4 = 4.3125$ bits for any4 representation. Note that standard int4 grouped quantization is already 4.25 bits/entry here, with extension to any4 only adding 0.0625 bits/entry of LUT overhead. + +In addition, the likely most efficient way to implement nf4 and af4 in software itself is via the same mechanism as any4: using a LUT, as there is no efficient programmatic way to convert a 4-bit integer to an nf4/af4 value using a small number of instructions. To support nf4/af4, our CUDA implementation also allows using a single 16 entry any4 LUT for an entire matrix instead of a LUT per each matrix row. This paper solely evaluates the latter. + +# 2.5. Quantization Process + +Vanilla quantization happens in 2 steps: scaling followed by rounding. + +# 2.5.1. SCALING + +Numeric formats have different numeric ranges, and high precision numeric formats usually have orders of magnitude larger ranges from low precision numeric formats, e.g., fp32 ranges from $-3.4 \times 10^{38}$ to $+3.4 \times 10^{38}$ while int4 ranges from -7 to +8. Moreover, the numeric range of a given tensor could be orders of magnitude different from a low precision format (e.g., most weight values range from -0.01 to +0.01 while int4 ranges from -7 to +8). Hence, directly rounding each element in a tensor to its nearest value in a numeric format will waste most of the bits and lead to high reconstruction error. + +Instead, most approaches scale a tensor, or a subset of a tensor, to the range of lower precision numeric format. 
Given a weight tensor $\pmb{w}$ and an index $i$, the scaled weight tensor, $\pmb{w}_S$, can be expressed as:

$$
w_{S_i} = \frac{w_i - \beta_i}{\alpha_i} \tag{1}
$$

Scale factors $\alpha$ and $\beta$ are high precision scalar values that are calculated for each group of indices, $G$. For asymmetric quantization:

$$
\alpha_{j \in G} = \frac{\max\left(w_{j \in G}\right) - \min\left(w_{j \in G}\right)}{Q_{\max} - Q_{\min}}, \qquad \beta_{j \in G} = \min\left(w_{j \in G}\right) \tag{2}
$$

For symmetric quantization:

$$
\alpha_{j \in G} = \frac{\max\left(\operatorname{abs}\left(w_{j \in G}\right)\right)}{Q_{\max}}, \qquad \beta = 0 \tag{3}
$$

where $G$ is a set of indices of a tensor, $\alpha$ and $\beta$ are scaling factors, and $Q_{\min}$ and $Q_{\max}$ are the minimum and maximum values of the lower precision numeric format.

Scaling can be applied at different granularities:

- Tensorwise: $G$ is the set of all indices of the tensor. Hence, all elements in the tensor $\pmb{w}$ share the same scale factors: $\alpha_{i,j} = \alpha$, $\beta_{i,j} = \beta$, $\forall i,j$.
- Rowwise: $G$ is the set of all indices of a row. Elements in each row of a tensor share the same scale factors: $\alpha_{i,j} = \alpha_i$, $\beta_{i,j} = \beta_i$, $\forall j$.
- Columnwise: $G$ is the set of all indices of a column. Elements in each column of a tensor share the same scale factors: $\alpha_{i,j} = \alpha_j$, $\beta_{i,j} = \beta_j$, $\forall i$.
- Groupwise: $G$ is a set of non-overlapping consecutive indices along a row (or column), of size $1 \times g$, where the group size $g$ is a scalar hyperparameter. Elements in each group $G_k$ share the same scale factors: $\alpha_{i,j} = \alpha_{i,G_k}$, $\beta_{i,j} = \beta_{i,G_k}$, $\forall j$ s.t. $kg \leq j < (k+1)g$. Values of 64 or 128 for $g$ usually provide a sweet spot between accuracy and overhead for 4-bit quantization.
- Blockwise: $G$ is the set of indices within a two-dimensional block of size $b \times b$, where $b$ is a scalar hyperparameter. Elements in each block $G_{k,l}$ of a tensor share the same scale factors: $\alpha_{i,j} = \alpha_{G_{k,l}}$, $\beta_{i,j} = \beta_{G_{k,l}}$, $\forall i, j$ s.t. $kb \leq i < (k+1)b$, $lb \leq j < (l+1)b$.

In our work, we focus on weight-only groupwise quantization (along the reduction dimension) and, unless stated otherwise, use a default group size $g$ of 128.

# 2.5.2. ROUNDING

After scaling, the next step is to round the scaled value to the nearest value in the low-precision quantization format:

$$
w_Q = \operatorname{round}_Q(w_S) \tag{4}
$$

And to dequantize: $\mathrm{dequant}(w_{Q}) = \alpha w_{Q} + \beta$.

# 3. Related Work

Quantization has long been studied as a way to run neural networks on CPUs and custom chips (Xie & Jabri, 1992). The various techniques can be categorized as follows:

Weights vs. Activations vs. Gradients vs. Optimizer States Quantization can be applied to weights only (AWQ (Lin et al., 2024), GPTQ (Frantar et al., 2023)), weights and activations (SmoothQuant (Xiao et al., 2023), LLM.int8() (Dettmers et al., 2022a)), the KV cache (KVQuant (Hooper et al., 2024)), and can also be applied to gradients for training (TinyScript (Fu et al., 2020)) and optimizer states (8-bit Optimizers (Dettmers et al., 2022b)).
Auto-regressive decoding with batch size 1 and sequence length 1 is a highly memory bound process (a large portion of compute time is spent loading weights rather than processing activations), so 4-bit weight-only quantization leads to better speedup than 8-bit weight and 8-bit activation quantization (PyTorch, 2024). Moreover, 4-bit weight-only quantization leads to a better accuracy-speed tradeoff than 4-bit weight and 4-bit activation quantization. In this research, we focus on quantizing weights only.

Post-Training Quantization (PTQ) vs. Quantization Aware Training (QAT) PTQ refers to quantization of a trained model without the need for further training. QAT refers to quantization during training, whether training a model from scratch, e.g., FP8-LM (Peng et al., 2023), or continually training or finetuning a trained model, e.g., QLoRA (Dettmers et al., 2023). This work falls under PTQ as it does not require further training of a model.

Numeric Representation While integer quantization is the most commonly used numeric representation, other numeric representations, described above, are also used for inference and/or training: fp8 (Wang et al., 2018), fp6 (Gernigon et al., 2023), fp4 (Sun et al., 2020), nf4, and af4 (Yoshida, 2023).

Lookup Table (LUT) Representation While most research quantizes to pre-defined numeric formats, other approaches use a dynamic format that is specified for each tensor, or subset of elements of a tensor, using a look-up table (LUT), a.k.a. a codebook. In scalar quantization techniques, e.g., DeepCompression for CNNs (Han et al., 2016), GOBO for BERT (Zadeh et al., 2020), and SqueezeLLM for LLMs (Kim et al., 2023), LUTs map scalar quantized values to scalar high precision values. In vector quantization techniques (Stock et al. (2020) for CNNs, AQLM for LLMs (Egiazarian et al., 2024)), LUTs map vectors of quantized values to vectors of high precision values.

Preserving Outlier/Sensitive Values LLM.int8() (Dettmers et al., 2022a) found that keeping $< 0.1\%$ of outlier activations and their corresponding weights in high precision minimizes the drop in accuracy. SqueezeLLM (Kim et al., 2023) found that keeping $0.40\%$ outlier weights and an additional $0.05\%$ sensitive weights, determined by a Hessian metric, minimizes accuracy drops. In this work, we quantize all values and keep no outlier/sensitive values in higher precision.

Pre-processing Weights and/or Activations While many quantization algorithms simply round each high precision value to a value in the quantized set of possible values (round to nearest (RTN), stochastic rounding (Xia et al., 2021), or adaptive rounding (Nagel et al., 2020)), other algorithms perform some offline or online processing of weights and/or activations. Instead of keeping outlier activations or sensitive weights, AWQ (Lin et al., 2024) and SmoothQuant (Xiao et al., 2023) mitigate their effects by dividing outlier channels by a scaling factor and compensating by multiplying weights with the same factor. Other quantization approaches mitigate outliers by applying matrix transformations on weights and activations, e.g., QuIP (Chee et al., 2023), QuaRot (Ashkboos et al., 2024a) and SpinQuant (Liu et al., 2024). Another line of research follows an iterative procedure of quantizing weights in subsets, modifying unquantized elements to mitigate the errors introduced after quantizing each subset, e.g., GPTQ (Frantar et al., 2023).

A common trend is to use a combination of techniques.
QuIP cascades incoherence processing with adaptive rounding; QTIP (Tseng et al., 2024) uses Hadamard transforms to remove outliers, vector quantization as the numeric representation, and other techniques; SqueezeLLM preserves a portion of outlier/sensitive values in high precision and applies scalar quantization. In this work, we opt for a one-shot quantization algorithm that does not require any online or offline pre-processing or transformations of weights and/or activations, and we focus on learning the quantization values from data while keeping inference efficient in hardware. any4 achieves state-of-the-art accuracy compared to other numeric-format approaches and is competitive with orthogonal approaches that pre-process weights and activations. We leave it to future work to combine any4 with such orthogonal techniques.

# 4. Proposed Solution

# 4.1. any4 Algorithm

In any4 quantization, we first apply group-wise scaling, then try to find the optimal numeric representation for each row of a weight matrix. Naively applying K-means clustering to the scaled weights leads to a sub-optimal quantization scheme. This is because K-means clustering would minimize the reconstruction error of the weight matrix rather than of the output of multiplying the weights with sample inputs, and, even for weight reconstruction, it would minimize the reconstruction error of the scaled weight matrix rather than of the original weight matrix.

We denote a weight matrix with dimensions $N\times K$ as $\pmb{w}$, an input vector with dimensions $M\times K$, where $M = 1$ without loss of generality, as $\pmb{x}$, and the output vector with dimensions $M\times N$ as $\pmb{y}$. Matrix multiplication in high precision can be expressed as:

$$
\boldsymbol{y} = \boldsymbol{w}\boldsymbol{x} \tag{5}
$$

and matrix multiplication with quantized weights as:

$$
\hat{\boldsymbol{y}} = \operatorname{dequant}\left(\boldsymbol{w}_Q\right)\boldsymbol{x} \tag{6}
$$

For the $i$th element of the output $\pmb{y}$, this is equivalent to:

$$
y_i = \sum_{\forall j} w_{i,j} x_j \tag{7}
$$

$$
\hat{y}_i = \sum_{\forall j} \operatorname{dequant}\left(w_{Q_{i,j}}\right) x_j \tag{8}
$$

Our goal is to find the set of $2^{n}$ quantized values for row $i$:

$$
Q_i = \left\{w_{Q_i^0}, w_{Q_i^1}, \dots, w_{Q_i^{2^n - 1}}\right\} \tag{9}
$$

for $n$-bit quantization (anyn) that minimizes the expected mean square error in output activations over possible input activations:

$$
\min_{Q_i} \mathbb{E}\|\hat{\boldsymbol{y}} - \boldsymbol{y}\| \tag{10}
$$

We choose a greedy approach to minimize the mean Frobenius norm of the output activation error vector by minimizing the absolute error of each of its elements:

$$
\begin{aligned}
\min_{Q_i} \mathbb{E}|\hat{y}_i - y_i| &= \min_{Q_i} \mathbb{E}\left|\sum_{\forall j} w_{i,j} x_j - \sum_{\forall j}\operatorname{dequant}\left(w_{Q_{i,j}}\right) x_j\right| \\
&= \min_{Q_i} \mathbb{E}\left|\sum_{\forall j}\left(w_{i,j} - \operatorname{dequant}\left(w_{Q_{i,j}}\right)\right) x_j\right| \tag{11}
\end{aligned}
$$

This way, we can focus on finding the optimal quantization configuration for each row $i$ of the weight matrix.
(Note that GPTQ opts to minimize the output activation error in a different way, such that all rows of the weight matrix are co-optimized together.) Expanding the right hand side of the equation:

$$
\min_{Q_i} \mathbb{E}|\hat{y}_i - y_i| = \min_{Q_i} \mathbb{E}\left|\sum_{\forall j}\left(w_{i,j} - \left(\alpha_{i,j} w_{Q_{i,j}} + \beta_{i,j}\right)\right) x_j\right| \tag{12}
$$

The high precision weights are mathematically equivalent to applying the scaling factors to the scaled weights (i.e., re-arrange Eqn. 1 to expand $w_{i,j}$ into $w_{i,j} = \alpha_{i,j}w_{S_{i,j}} + \beta_{i,j}$):

$$
\begin{aligned}
\min_{Q_i} \mathbb{E}|\hat{y}_i - y_i| &= \min_{Q_i} \mathbb{E}\left|\sum_{\forall j}\left(\alpha_{i,j} w_{S_{i,j}} + \beta_{i,j} - \left(\alpha_{i,j} w_{Q_{i,j}} + \beta_{i,j}\right)\right) x_j\right| \\
&= \min_{Q_i} \mathbb{E}\left|\sum_{\forall j}\alpha_{i,j}\left(w_{S_{i,j}} - w_{Q_{i,j}}\right) x_j\right| \tag{13}
\end{aligned}
$$

The offset factors $\beta_{i,j}$ cancel each other out. Hence, we have:

$$
\min_{Q_i} \mathbb{E}|\hat{y}_i - y_i| = \min_{Q_i} \mathbb{E}\left|\sum_{\forall j}\left(\alpha_{i,j} w_{S_{i,j}} x_j - \alpha_{i,j} w_{Q_{i,j}} x_j\right)\right| \tag{14}
$$

We now solve this with a K-means-style alternating optimization procedure:

0. Initialize: for the $i$th row of the weight matrix, initialize $Q_i$ to a random set of $2^n$ values:

$$
Q_i = \left\{w_{Q_i^0}, w_{Q_i^1}, \dots, w_{Q_i^{2^n - 1}}\right\} \tag{15}
$$

1. E-Step: Given $Q_i$ and the row of scaled weights:

$$
\{w_{S_{i,j}}\}_{\forall j} = \left\{w_{S_{i,0}}, w_{S_{i,1}}, \dots, w_{S_{i,K-1}}\right\} \tag{16}
$$

we would like to deduce the best $w_{Q_{i,j}}$ for each corresponding $w_{S_{i,j}}$ so as to minimize the expression defined in Eq. 14. Since in this step the possible values in $Q_i$ are fixed and we are merely selecting from a set of discrete values, we apply a local minimization step and re-write Eq. 14 as:

$$
w_{Q_{i,j}} = \arg\min_{v \in Q_i}\left(\alpha_{i,j} w_{S_{i,j}} x_j - \alpha_{i,j} v x_j\right)^2 = \arg\min_{v \in Q_i}\left(\alpha_{i,j} x_j\right)^2\left(w_{S_{i,j}} - v\right)^2 \tag{17}
$$

Since $\left(\alpha_{i,j} x_j\right)^2$ is fixed in this step and independent of the candidate value $v$, we can drop that factor:

$$
w_{Q_{i,j}} = \arg\min_{v \in Q_i}\left(w_{S_{i,j}} - v\right)^2 \tag{18}
$$

2. M-Step: After applying the E-Step above, each $w_{Q_{i,j}}$ is set to one of the $2^n$ values in the set $Q_i$. We refer to the set of column indices assigned to a specific quantized value $w_{Q_i^q}$ as a cluster, $C_i^q = \{j : w_{Q_{i,j}} = w_{Q_i^q}\}$.
We can rewrite Eq. 14 as a separate sum over the elements of each cluster:

$$
\min_{Q_i} \mathbb{E}|\hat{y}_i - y_i| = \min_{Q_i} \mathbb{E}\left|\sum_{q=0}^{2^n - 1}\sum_{j \in C_i^q}\left(\alpha_{i,j} w_{S_{i,j}} x_j - \alpha_{i,j} w_{Q_i^q} x_j\right)\right| \tag{19}
$$

To minimize this term, we can aim to set the difference for the elements of each cluster to 0:

$$
\mathbb{E}\left|\sum_{j \in C_i^q}\left(\alpha_{i,j} w_{S_{i,j}} x_j - \alpha_{i,j} w_{Q_i^q} x_j\right)\right| = 0 \tag{20}
$$

The expression inside the expectation is a scalar value. Moreover, except for the input activations $x$, all the other variables are deterministic and known offline. Hence, the expectation operator only needs to be applied to the input activations:

$$
\sum_{j \in C_i^q}\left(\alpha_{i,j} w_{S_{i,j}} \mathbb{E}|x_j| - \alpha_{i,j} w_{Q_i^q} \mathbb{E}|x_j|\right) = 0 \tag{21}
$$

Re-writing:

$$
\sum_{j \in C_i^q} \alpha_{i,j} w_{S_{i,j}} \mathbb{E}|x_j| = \sum_{j \in C_i^q} \alpha_{i,j} w_{Q_i^q} \mathbb{E}|x_j| = w_{Q_i^q}\sum_{j \in C_i^q} \alpha_{i,j} \mathbb{E}|x_j| \tag{22}
$$

Re-arranging:

$$
w_{Q_i^q} = \frac{\sum_{j \in C_i^q} \alpha_{i,j} w_{S_{i,j}} \mathbb{E}|x_j|}{\sum_{j \in C_i^q} \alpha_{i,j} \mathbb{E}|x_j|} \tag{23}
$$

Eqn. 23 states that the optimal value to represent the scaled weights within a cluster is their average, weighted by the product of each weight element's scaling factor and the mean absolute value of the activations applied to that element.

We alternate between the E-Step and M-Step until the values of $Q_i$ converge.

The E-Step is equivalent to the cluster-assignment step of K-means clustering, while the M-Step is equivalent to the centroid-update step of weighted K-means. Hence, our mathematical formulation guides us to creating the LUT of each row of a scaled weight matrix with the algorithm depicted in Alg. 1. We also summarize our algorithm in Fig. 2. We speed up the process by parallelizing the loop over each linear weight's rows, enabling us to quantize Llama3 8B in 10 minutes.

While most quantization papers use a dataset like C4 to obtain a set of calibration activations, we hand curate a single calibration sample, as shown in Listing 1, that covers a diverse set of topics, and then take the mean of the absolute value of activations along the channel axis to represent $\mathbb{E}|x|$:

- Fiction: "Once upon a time, a girl named Alice was living alone on an island. One day, she met a wizard ..."
- News: "The United Nations held its General Assembly meeting this year amid multiple world crises and wars. In his speech, the General Secretary called for ..."
- Code: `public static void main(String[] args) { System.out.println("Hello world!"); }`
- Math: $(5.2 + 2.7) / 0.6 - 1.9 * 2.2 =$
- Facts: "The capital of Egypt is Cairo. It is the largest city in the region and is home to..."

![](images/63e274c71beb9067655811e77eb36b7d72d2a85bf40df3b24b8d3810c938dadd.jpg)
Listing 1: Calibration sample used to generate LUTs.
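As a minimal illustration of the procedure above (a simplified sketch rather than the exact implementation: it assumes symmetric group-wise scaling, and `x_mean_abs` stands for the calibrated per-channel $\mathbb{E}|x|$), the LUT for a single weight row can be fit with scikit-learn's weighted k-means:

```python
import numpy as np
from sklearn.cluster import KMeans

def any4_row_lut(w_row, x_mean_abs, group_size=128, n_bits=4, q_max=7.0):
    """Fit one per-row LUT by weighted k-means (simplified sketch of Eq. 23).

    w_row:      (K,) high-precision weights of a single matrix row
    x_mean_abs: (K,) calibrated E|x_j| for each input channel
    Returns the LUT over scaled weights, the per-group scales, and the 4-bit
    code assigned to every element of the row.
    """
    K = w_row.shape[0]
    groups = w_row.reshape(K // group_size, group_size)
    # Symmetric per-group scale: map each group onto [-q_max, q_max].
    alpha = np.abs(groups).max(axis=1, keepdims=True) / q_max + 1e-12
    w_scaled = (groups / alpha).reshape(K)
    # Sample weight from Eq. 23: scale factor times mean absolute activation.
    weight = np.repeat(alpha.squeeze(1), group_size) * x_mean_abs
    km = KMeans(n_clusters=2 ** n_bits, n_init=10, random_state=0)
    codes = km.fit_predict(w_scaled.reshape(-1, 1), sample_weight=weight)
    lut = km.cluster_centers_.squeeze(1)  # 16 arbitrary floating point values
    return lut, alpha.squeeze(1), codes.astype(np.uint8)

# Toy usage on random data, for illustration only.
rng = np.random.default_rng(0)
lut, alpha, codes = any4_row_lut(rng.standard_normal(4096) * 0.01,
                                 np.abs(rng.standard_normal(4096)))
print(lut.shape, alpha.shape, codes.shape)  # (16,) (32,) (4096,)
```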
+ +![](images/bd796224bdc0787d72e79398f5bc862df7c9ce8679b7e0eeb1e7f6b928aa6e88.jpg) + +![](images/4382acf45fada2f7da105a39529239b22ae85f60842909cc0a481be85d51fe58.jpg) +Figure 2: any4 quantization process + +# 4.2. tinygemm Library + +As part of this paper, we present tinygemm, a GEMM library optimized for low-latency LLM inference at small batch sizes (1 to 16) for Nvidia GPU Ampere generation and later architectures. For a matrix multiplication $\pmb{y} = \pmb{x}\pmb{w}^T$ where $\pmb{x}$ is of size $M\times K$ and $\pmb{w}$ is of size $N\times K$ ( $M$ and $N$ being the outer dimensions and $K$ being the reduction dimension), in linear layers, the product of batch size + +
**Llama3.2 1B**

| | WikiText-2 ↓ | C4 ↓ | PTB ↓ | CodeParrot ↓ | HumanEval Pass@1 ↑ | MBPP Pass@1 ↑ | MMLU ↑ | HellaSwag ↑ | GSM8K ↑ | BBH ↑ |
|---|---|---|---|---|---|---|---|---|---|---|
| FP16 | 9.76 | 12.77 | 16.56 | 3.49 | 16.46% | 21.4% | 36.1% | 47.7% | 6.60% | 31.1% |
| INT4 | 11.89 | 15.74 | 20.32 | 4.08 | 9.76% | 11.4% | 30.1% | 44.7% | 3.18% | 26.2% |
| FP4 | 13.01 | 17.11 | 21.89 | 4.28 | 8.54% | 5.8% | 29.3% | 43.6% | 2.27% | 23.3% |
| NF4 | 10.99 | 14.63 | 18.78 | 3.82 | 13.4% | 13.8% | 33.3% | 45.8% | 2.65% | 26.8% |
| ANY4 | 10.63 | 13.95 | 17.94 | 3.71 | 11.0% | 18.6% | 32.9% | 46.7% | 3.71% | 29.0% |

**Llama3 8B**

| | WikiText-2 ↓ | C4 ↓ | PTB ↓ | CodeParrot ↓ | HumanEval Pass@1 ↑ | MBPP Pass@1 ↑ | MMLU ↑ | HellaSwag ↑ | GSM8K ↑ | BBH ↑ |
|---|---|---|---|---|---|---|---|---|---|---|
| FP16 | 6.14 | 8.93 | 10.59 | 2.54 | 29.3% | 41.4% | 62.0% | 60.1% | 50.7% | 62.8% |
| INT4 | 6.87 | 9.89 | 11.37 | 2.83 | 23.2% | 35.4% | 59.6% | 58.6% | 40.6% | 58.5% |
| FP4 | 7.10 | 10.22 | 11.81 | 2.89 | 22.0% | 36.8% | 57.1% | 58.5% | 35.0% | 53.2% |
| NF4 | 6.63 | 9.52 | 11.14 | 2.72 | 23.2% | 39.2% | 60.7% | 59.1% | 41.1% | 59.0% |
| ANY4 | 6.51 | 9.40 | 11.07 | 2.68 | 21.3% | 39.2% | 61.0% | 59.5% | 41.7% | 59.2% |

**Llama3 70B**

| | WikiText-2 ↓ | C4 ↓ | PTB ↓ | CodeParrot ↓ | HumanEval Pass@1 ↑ | MBPP Pass@1 ↑ | MMLU ↑ | HellaSwag ↑ | GSM8K ↑ | BBH ↑ |
|---|---|---|---|---|---|---|---|---|---|---|
| FP16 | 2.86 | 6.77 | 8.16 | 1.91 | 17.7% | 60.8% | 75.4% | 66.3% | 80.6% | 82.4% |
| INT4 | 3.63 | 7.97 | 8.86 | 2.21 | 18.3% | 45.0% | 73.0% | 66.2% | 73.9% | 78.4% |
| FP4 | 3.94 | 7.76 | 8.99 | 2.17 | 22.0% | 50.8% | 71.9% | 65.6% | 75.3% | 77.9% |
| NF4 | 3.43 | 7.67 | 8.84 | 2.15 | 18.9% | 39.6% | 73.7% | 66.1% | 75.9% | 79.3% |
| ANY4 | 3.20 | 7.01 | 8.33 | 1.99 | 17.1% | 57.4% | 75.1% | 66.1% | 78.5% | 81.8% |
+ +Table 1: Quantizing Llama3 models with various numeric formats. Results for Llama2 and Mistral/Mixtral are in the Appendix. + +and sequence length corresponds to matrix dimension $M$ . At $M \leq 8$ , activation $x$ is itself much smaller than tensor core tile sizes ( $m = 16, n = 8, k = 16$ ) for 16-bit float Ampere+ mma "tensor core" fixed-function matrix multiplication instructions. In this case, each $8 \times 16$ tile of $w$ (weights) is only used once (no data reuse). Thus, multistage asynchronous pipelining and data reuse concerns in typical high-performance GPU GEMM kernels are reduced, as the problem is largely memory latency (or bandwidth) limited. Tensor cores still outperform manual (scalar) matrix multiplication at $M = 1$ (GEMV / matrix-vector multiplication) per our analysis. An early version of tinygemm, largely focused on int4 grouped quantization for small batch sizes, has been part of core PyTorch since late 2023, subsequently utilized by gpt-fast (PyTorch, 2023), torchao (PyTorch, 2024), and Hugging Face Transformers (Wolf et al., 2020). + +Many inference works (especially in open source) concentrate on $M = 1$ performance, where latency is a concern. Even in this case, where we would be using only $\frac{1}{8}$ or $\frac{1}{16}$ of tensor core throughput, we improve latency by laying out matrices in main (global) memory in the exact format that mma expects per tile rather than standard row-major / column-major format. Typical tensor core GEMM kernels use shared memory (a small, high-speed user-controllable scratchpad memory) to transpose tiles of matrices into the desired format before multiplication can proceed. We avoid this by performing the transposition in advance, allowing matrix data to pass directly from global memory to registers. + +As there is little to no weight reuse opportunity for small batch sizes, and loads into registers can be asynchronous as they generally do not stall execution until the point of first use, tinygemm does not use shared memory in many cases. This strategy improves performance at small batch sizes, but is not applicable for larger sizes. To improve efficiency, when $M \leq 8$ , we maintain weights on the left to use the $16 \times 16$ tile, computing $\mathbf{y} = (\mathbf{w}\mathbf{x}^T)^T$ flipping the order of matrices presented to mma with transpositions performed on the fly, and if $M > 8$ , we maintain weights on the right for the $8 \times 16$ tile $(\mathbf{y} = \mathbf{x}\mathbf{w}^T)$ . + +To implement int4, nf4, or any4 GEMM, we dequantize weights on the fly before mma multiplication. Speed is improved by always ensuring that we can load matrix data using vectorized 16 byte loads in coalesced and contiguous fashion across the warp from global memory. In cases where a single thread's quantized tile data is less than 16 bytes (a m16n8k16 "B" tensor core layout with quantized 4-bit values only needs 2 bytes loaded prior to dequantization per CUDA thread per mma), multiple tiles along the reduction dimension (" $k$ -tiles" in tinygemm terminology) can be packed together to ensure that wide data loads can be used in all cases. + +Instead of typical int4-to-float dequantization (converting an integer in $[-8, 7]$ to floating point via native instructions or bit manipulation), we can use a 16-entry LUT per row containing arbitrary floating point values. In tinygemm, this LUT is held in a single register with lookup provided using + +
**Llama3 8B** (FP16 baseline WikiText-2 perplexity: 6.1)

| Quantization Algorithm | 4-bit Format | WikiText-2 ↓ | 3-bit Format | WikiText-2 ↓ | 2-bit Format | WikiText-2 ↓ |
|---|---|---|---|---|---|---|
| RTN | INT4 | 6.9 | INT3 | 17.1 | INT2 | - |
| GPTQ | INT4 | 6.5 | INT3 | 8.2 | INT2 | - |
| AWQ | INT4 | 6.6 | INT3 | 8.2 | INT2 | - |
| QuIP | INT4 | 6.5 | INT3 | 7.5 | INT2 | - |
| RTN | ANY4 | 6.5 | ANY3 | 8.0 | ANY2 | - |

**Llama3 70B** (FP16 baseline WikiText-2 perplexity: 2.9)

| Quantization Algorithm | 4-bit Format | WikiText-2 ↓ | 3-bit Format | WikiText-2 ↓ | 2-bit Format | WikiText-2 ↓ |
|---|---|---|---|---|---|---|
| RTN | INT4 | 3.6 | INT3 | 11.8 | INT2 | 4.6E5 |
| GPTQ | INT4 | 3.3 | INT3 | 5.2 | INT2 | 11.9 |
| AWQ | INT4 | 3.3 | INT3 | 4.8 | INT2 | 1.7E6 |
| QuIP | INT4 | 3.4 | INT3 | 4.7 | INT2 | 13.0 |
| RTN | ANY4 | 3.2 | ANY3 | 4.6 | ANY2 | 253.8 |

Table 2: Quantizing Llama3 models with various quantization algorithms for different bit widths.
GPU warp shuffle functionality, with the 4-bit quantization codes used as LUT indices. An alternative strategy would be to use a shared memory LUT containing all possible $16 \times 16 = 256$ pairs of any4 reproduction values, so that two packed any4 values (in a byte) can be dequantized per lookup. While this amount of shared memory usage would likely not affect performance (via occupancy) much, it suffers shared memory bank conflict penalties in many circumstances.

# 5. Results

We quantize the weights of all linear modules of all transformer layers: the key, query, value, and output projections, and the up, down, and gate projections of the feed-forward networks (FFN). Following most quantization papers, we keep the weights of the embedding and final classification layers in high precision.

We evaluate both perplexity and downstream tasks. For perplexity, we ported the GPTQ implementation for WikiText-2 (Merity et al., 2017), C4 (Raffel et al., 2019), and Penn Treebank (Marcus et al., 1993), which is used by the codebases of other quantization papers. To add a coding domain, we also report perplexity on CodeParrot (CodeParrot).

For downstream tasks, we used the Eleuther Harness (Gao et al., 2024) for natural language tasks, and the BigCode Harness (Ben Allal et al., 2022) for coding tasks. Accuracies on downstream tasks tend to be noisy (Wang et al., 2024), while perplexity is a less noisy indicator of a model's performance.

Comparison with Other Numeric Representations We first compare the accuracy of any4 with other numeric formats: int4, fp4, nf4. We use group-wise scaling with group size 128, and asymmetric scaling for all models, except for Llama3 70B where we found symmetric scaling leads to better results.

We ran on different model families (Llama (Touvron et al., 2023a) and Mistral (Jiang et al., 2023)), different generations (Llama2 (Touvron et al., 2023b) and Llama3 (Grattafiori et al., 2024)), and different sizes (from 1B all the way to 70B). We provide results for Llama3 in Table 1, Llama2 in Table A1, and Mistral in Table A2. Our results show any4 has the best accuracies across all models.

Speed Comparisons We benchmark matrix multiplication of vector activations and square weight tensors from 1K to 16K on an A100 80GB GPU using PyTorch 2.3.0 and provide the speedups of our tinygemm library in Fig. 3. int4, nf4, and any4 were implemented using our tinygemm library. int4 kernels have the highest speedup, reaching close to $3\times$. nf4 and any4 speedups reach up to $2\times$; lower than int4 because of the overhead of looking up the LUTs. Nevertheless, any4 has almost the same speedup as nf4, despite the latter requiring a single LUT for a whole tensor and the former requiring a separate LUT for each row in the weight matrix.

Comparison with Orthogonal Quantization Techniques As explained in the Related Work section, our work proposes a new numeric representation and applies RTN (round-to-nearest). Although our work is orthogonal to approaches that transform weights and/or activations to make them more quantization friendly, we compare any4 to GPTQ, AWQ, and QuIP, which use int4, in Table 2. Results of AWQ, GPTQ, and QuIP are obtained from (Huang et al., 2024). For 4-bit, the results show that any4 has either the best or competitive performance. For future work, we can evaluate these orthogonal techniques together with any4, replacing the int4 representation with any4.
+ +3-bit and 2-bit Quantization Although our main goal was 4-bit representation, we ran experiments to see how any3 and + +
**Llama3.2 1B**

| | Calibration Data | Number of Samples | Sequence Length per Sample | WikiText-2 ↓ | C4 ↓ | PTB ↓ |
|---|---|---|---|---|---|---|
| FP16 | - | - | - | 9.76 | 12.77 | 16.56 |
| ANY4 | WikiText-2 | 128 | 2048 | 10.70 | 14.08 | 18.02 |
| ANY4 | Pile | 128 | 2048 | 10.70 | 13.99 | 18.26 |
| ANY4 | C4 | 128 | 4096 | 10.74 | 14.14 | 18.10 |
| ANY4 | C4 | 128 | 2048 | 10.67 | 14.05 | 17.97 |
| ANY4 | C4 | 128 | 512 | 10.62 | 13.96 | 18.03 |
| ANY4 | Handwritten Prompt | 1 | - | 10.63 | 13.95 | 17.94 |

Table 3: any4 quantization with different calibration data.
![](images/af5b4ca1d8f17a11775935ee0ae739793049b3e5013195cf383d11234aa2340.jpg)
Figure 3: Speedup of our tinygemm CUDA kernels on an 80GB A100 on matrix multiplication of a $1 \times K$ input by a $K \times K$ weight, w.r.t. PyTorch's bfloat16 implementation.

any2 perform compared to the prior orthogonal quantization techniques (Table 2). For 3-bit, any3 is either the best or competitive with the other approaches. For 2-bit, QuIP is the best, while any2 is better than AWQ and competitive with GPTQ.

# 5.1. Ablation Studies

# Calibration Data

In Table 3 we ablate the calibration dataset used to calculate the sample weighting in Eqn. 23 of our any4 algorithm. The results show that our proposed handwritten sample performs better than datasets commonly used in the literature, despite being significantly smaller in number of tokens. Note that the handwritten sample, or prompt, has a fixed number of words that translates to a different number of tokens depending on the tokenizer, which changes across models. Our prompt has only 88 words, which in the worst case translates to a few hundred tokens. These results may indicate that a single data sample with diverse topics can be sufficient, or even better, for calibration than many long sample sequences. Our evaluation sequence length is 2048 (following Lin et al., 2024; Frantar et al., 2023), calibration is on the training split of each dataset, and evaluation is on the validation or test split.

Group Size In Table 4 we ablate the quantization group size from 64 to 1024. any4 always has the lowest perplexity among the 4-bit representations across all group sizes. It is noteworthy that fp4 and nf4 perplexity degenerates at the large group size of 1024, while any4 only increases marginally.
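For reference, the per-channel statistic $\mathbb{E}|x|$ that provides the sample weighting in Eqn. 23 can be collected with an ordinary forward hook. The sketch below is illustrative only; `model` and `tokenizer` are assumed to be Hugging Face-style objects and the function name is our own:

```python
import torch

@torch.no_grad()
def calibrate_mean_abs(model, tokenizer, prompt, device="cuda"):
    """Run one calibration prompt and record E|x| per input channel of each Linear."""
    stats = {}

    def make_hook(name):
        def hook(module, inputs, output):
            x = inputs[0].detach().float()                 # (batch, seq, in_features)
            stats[name] = x.abs().mean(dim=(0, 1)).cpu()   # average over batch and tokens
        return hook

    handles = [m.register_forward_hook(make_hook(n))
               for n, m in model.named_modules()
               if isinstance(m, torch.nn.Linear)]
    ids = tokenizer(prompt, return_tensors="pt").input_ids.to(device)
    model(ids)                                             # single forward pass
    for h in handles:
        h.remove()
    return stats                                           # {module name: (in_features,) tensor}
```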
**Llama3.2 1B** (FP16 baseline C4 perplexity: 12.77)

| | g = 64 | g = 128 | g = 256 | g = 512 | g = 1024 |
|---|---|---|---|---|---|
| FP4 | 16.19 | 17.11 | 18.12 | 20.43 | 2.3E6 |
| NF4 | 14.27 | 14.63 | 14.98 | 15.38 | 7.8E5 |
| ANY4 | 13.75 | 13.95 | 14.09 | 14.24 | 14.34 |

Table 4: C4 perplexity after quantizing with different group sizes.
# 6. Conclusion & Future Work

We have presented any4, an algorithm to find an optimal low-bit numeric representation for each row in a weight matrix, as well as tinygemm, a matrix multiplication library for low-latency, low-bit inference. We have shown that the accuracy of any4 is superior to other 4-bit numeric formats, with low memory overhead, and is competitive with various orthogonal quantization techniques that involve further preprocessing. We would like to explore combining any4 with these orthogonal techniques in the future.

# Acknowledgements

We would like to thank Newsha Ardalani for help in running experiments; Daniel Haziza, Francisco Massa, Luca Wehrstedt, Bram Wasti, Steven Li, and Lin Xiao for discussions.

# Impact Statement

This paper presents a work that quantizes pretrained models. The input to the algorithm is a model's pretrained weights, architecture, and a calibration dataset (which in our case was a single hand-written prompt). We have not evaluated whether the quantization algorithm increases or decreases any societal impact of the underlying model. One factor that may introduce bias into the model is the calibration dataset. We leave it for future work to analyze the effect of different calibration datasets (or prompts in our case) on bias and truthfulness.

# References

Arthur, D. and Vassilvitskii, S. k-means++: the advantages of careful seeding. In Proceedings of the Eighteenth Annual ACM-SIAM Symposium on Discrete Algorithms, SODA '07, pp. 1027-1035, USA, 2007. Society for Industrial and Applied Mathematics. ISBN 9780898716245.
Ashkboos, S., Mohtashami, A., Croci, M. L., Li, B., Cameron, P., Jaggi, M., Alistarh, D., Hoefler, T., and Hensman, J. QuaRot: Outlier-free 4-bit inference in rotated LLMs. In The Thirty-eighth Annual Conference on Neural Information Processing Systems, 2024a. URL https://openreview.net/forum?id=dfqsW38v1X.
Ashkboos, S., Mohtashami, A., Croci, M. L., Li, B., Cameron, P., Jaggi, M., Alistarh, D., Hoefler, T., and Hensman, J. QuaRot: Outlier-free 4-bit inference in rotated LLMs, 2024b. URL https://arxiv.org/abs/2404.00456.
Ben Allal, L., Muennighoff, N., Kumar Umapathi, L., Lipkin, B., and von Werra, L. A framework for the evaluation of code generation models. https://github.com/bigcode-project/bigcode-evaluation-harness, 2022.
Chee, J., Cai, Y., Kuleshov, V., and Sa, C. D. QuIP: 2-bit quantization of large language models with guarantees. In Thirty-seventh Conference on Neural Information Processing Systems, 2023. URL https://openreview.net/forum?id=xrk9g5vcXR.
CodeParrot. codeparrot/codeparrot-clean. URL https://huggingface.co/datasets/codeparrot/codeparrot-clean.
Dai, S., Venkatesan, R., Ren, M., Zimmer, B., Dally, W., and Khailany, B. VS-Quant: Per-vector scaled quantization for accurate low-precision neural network inference. In Smola, A., Dimakis, A., and Stoica, I. (eds.), Proceedings of Machine Learning and Systems, volume 3, pp. 873-884, 2021. URL https://proceedings.mlsys.org/paper_files/paper/2021/file/48a6431f04545e11919887748ec5cb52-Paper.pdf.
Darvish Rouhani, B., Lo, D., Zhao, R., Liu, M., Fowers, J., Ovtcharov, K., Vinogradsky, A., Massengill, S., Yang, L., Bittner, R., Forin, A., Zhu, H., Na, T., Patel, P., Che, S., Chand Koppaka, L., Song, X., Som, S., Das, K., T, S., Reinhardt, S., Lanka, S., Chung, E., and Burger, D. Pushing the limits of narrow precision inferencing at cloud scale with microsoft floating point.
In Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M., and Lin, H. (eds.), Advances in Neural Information Processing Systems, volume 33, pp. 10271-10281. Curran Associates, Inc., 2020. URL https://proceedings.neurips.cc/paper_files/paper/2020/file/747e32ab0fea7fbd2ad9ec03daa3f840-Paper.pdf.
Dettmers, T., Lewis, M., Belkada, Y., and Zettlemoyer, L. Gpt3.int8(): 8-bit matrix multiplication for transformers at scale. In Koyejo, S., Mohamed, S., Agarwal, A., Belgrave, D., Cho, K., and Oh, A. (eds.), Advances in Neural Information Processing Systems, volume 35, pp. 30318-30332. Curran Associates, Inc., 2022a. URL https://proceedings.neurips.cc/paper_files/paper/2022/file/c3ba4962c05c49636d4c6206a97e9c8a-Paper-Conference.pdf.
Dettmers, T., Lewis, M., Shleifer, S., and Zettlemoyer, L. 8-bit optimizers via block-wise quantization. In International Conference on Learning Representations, 2022b. URL https://openreview.net/forum?id=shpkpVXzo3h.
Dettmers, T., Pagnoni, A., Holtzman, A., and Zettlemoyer, L. Qlora: Efficient finetuning of quantized llms. In Oh, A., Naumann, T., Globerson, A., Saenko, K., Hardt, M., and Levine, S. (eds.), Advances in Neural Information Processing Systems, volume 36, pp. 10088-10115. Curran Associates, Inc., 2023. URL https://proceedings.neurips.cc/paper_files/paper/2023/file/1feb87871436031bdc0f2beaa62a049b-Paper-Conference.pdf.
Egiazarian, V., Panferov, A., Kuznedev, D., Frantar, E., Babenko, A., and Alistarh, D. Extreme compression of large language models via additive quantization. In Forty-first International Conference on Machine Learning, 2024. URL https://openreview.net/forum?id=5mCaITRTmo.
Frantar, E., Ashkboos, S., Hoefler, T., and Alistarh, D. OPTQ: Accurate quantization for generative pre-trained transformers. In The Eleventh International Conference on Learning Representations, 2023. URL https://openreview.net/forum?id=tcbBPnfwxsS.
Fu, F., Hu, Y., He, Y., Jiang, J., Shao, Y., Zhang, C., and Cui, B. Don't waste your bits! Squeeze activations and gradients for deep neural networks via TinyScript. In III, H. D. and Singh, A. (eds.), Proceedings of the 37th International Conference on Machine Learning, volume 119 of Proceedings of Machine Learning Research, pp. 3304-3314. PMLR, 13-18 Jul 2020. URL https://proceedings.mlr.press/v119/fu20c.html.
Gao, L., Tow, J., Abbasi, B., Biderman, S., Black, S., DiPofi, A., Foster, C., Golding, L., Hsu, J., Le Noac'h, A., Li, H., McDonell, K., Muennighoff, N., Ociepa, C., Phang, J., Reynolds, L., Schoelkopf, H., Skowron, A., Sutawika, L., Tang, E., Thite, A., Wang, B., Wang, K., and Zou, A. A framework for few-shot language model evaluation, 07 2024. URL https://zenodo.org/records/12608602.
Gernigon, C., Filip, S.-I., Sentieys, O., Coggiola, C., and Bruno, M. Low-precision floating-point for efficient onboard deep neural network processing, 2023. URL https://arxiv.org/abs/2311.11172.
Goodfellow, I., Bengio, Y., and Courville, A. Deep Learning. MIT Press, 2016. URL http://www.deeplearningbook.org.
Grattafiori, A., Dubey, A., Jauhri, A., Pandey, A., Kadian, A., Al-Dahle, A., et al. The Llama 3 herd of models, 2024. URL https://arxiv.org/abs/2407.21783.
Han, S., Mao, H., and Dally, W. J. Deep compression: Compressing deep neural networks with pruning, trained quantization and huffman coding. International Conference on Learning Representations (ICLR), 2016.
Hooper, C. R. C., Kim, S., Mohammadzadeh, H., Mahoney, M. W., Shao, S., Keutzer, K., and Gholami, A. KVQuant: Towards 10 million context length LLM inference with KV cache quantization. In The Thirty-eighth Annual Conference on Neural Information Processing Systems, 2024. URL https://openreview.net/forum?id=OLXotew9Du.
Huang, W., Zheng, X., Ma, X., Qin, H., Lv, C., Chen, H., Luo, J., Qi, X., Liu, X., and Magno, M. An empirical study of llama3 quantization: From llms to mllms, 2024. URL https://arxiv.org/abs/2404.14047.
Jacob, B., Kligys, S., Chen, B., Zhu, M., Tang, M., Howard, A., Adam, H., and Kalenichenko, D. Quantization and training of neural networks for efficient integer-arithmetic-only inference. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2018.
Jiang, A. Q., Sablayrolles, A., Mensch, A., Bamford, C., Chaplot, D. S., de las Casas, D., Bressand, F., Lengyel, G., Lample, G., Saulnier, L., Lavaud, L. R., Lachaux, M.-A., Stock, P., Scao, T. L., Lavril, T., Wang, T., Lacroix, T., and Sayed, W. E. Mistral 7b, 2023.
URL https://arxiv.org/abs/2310.06825.
Kim, S., Hooper, C., Gholami, A., Dong, Z., Li, X., Shen, S., Mahoney, M., and Keutzer, K. SqueezeLLM: Dense-and-sparse quantization. arXiv, 2023.
Lin, J., Tang, J., Tang, H., Yang, S., Chen, W.-M., Wang, W.-C., Xiao, G., Dang, X., Gan, C., and Han, S. AWQ: Activation-aware weight quantization for llm compression and acceleration. In MLSys, 2024.
Liu, Z., Zhao, C., Fedorov, I., Soran, B., Choudhary, D., Krishnamoorthi, R., Chandra, V., Tian, Y., and Blankevoort, T. SpinQuant: LLM quantization with learned rotations, 2024. URL https://arxiv.org/abs/2405.16406.
Lloyd, S. Least squares quantization in PCM. IEEE Transactions on Information Theory, 28(2):129-137, 1982a. doi: 10.1109/TIT.1982.1056489.
Lloyd, S. Least squares quantization in PCM. IEEE Transactions on Information Theory, 28(2):129-137, 1982b.
MacQueen, J. et al. Some methods for classification and analysis of multivariate observations. In Proceedings of the fifth Berkeley symposium on mathematical statistics and probability, volume 1, pp. 281-297. Oakland, CA, USA, 1967.
Marcus, M. P., Marcinkiewicz, M. A., and Santorini, B. Building a large annotated corpus of english: the penn treebank. Comput. Linguist., 19(2):313-330, June 1993. ISSN 0891-2017.
Max, J. Quantizing for minimum distortion. IRE Transactions on Information Theory, 6(1):7-12, 1960. doi: 10.1109/TIT.1960.1057548.
Merity, S., Xiong, C., Bradbury, J., and Socher, R. Pointer sentinel mixture models. In International Conference on Learning Representations, 2017. URL https://openreview.net/forum?id=Byj72udxe.
Nagel, M., Amjad, R. A., Van Baalen, M., Louizos, C., and Blankevoort, T. Up or down? adaptive rounding for post-training quantization. In Proceedings of the 37th International Conference on Machine Learning, ICML'20. JMLR.org, 2020.
Pedregosa, F., Varoquaux, G., Gramfort, A., Michel, V., Thirion, B., Grisel, O., Blondel, M., Prettenhofer, P., Weiss, R., Dubourg, V., Vanderplas, J., Passos, A., Cournapeau, D., Brucher, M., Perrot, M., and Duchesnay, E. Scikit-learn: Machine learning in Python. Journal of Machine Learning Research, 12:2825-2830, 2011.
Peng, H., Wu, K., Wei, Y., Zhao, G., Yang, Y., Liu, Z., Xiong, Y., Yang, Z., Ni, B., Hu, J., Li, R., Zhang, M., Li, C., Ning, J., Wang, R., Zhang, Z., Liu, S., Chau, J., Hu, H., and Cheng, P. FP8-LM: Training fp8 large language models, 2023.
PyTorch. gpt-fast, 2023. URL https://github.com/pytorch-labs/gpt-fast.
PyTorch. torchao, 2024. URL https://github.com/pytorch/ao.
Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W., and Liu, P. J. Exploring the limits of transfer learning with a unified text-to-text transformer. arXiv e-prints, 2019.
Rouhani, B., Zhao, R., Elango, V., Shafipour, R., Hall, M., Mesmakhosroshahi, M., More, A., Melnick, L., Golub, M., Varatkar, G., Shao, L., Kolhe, G., Melts, D., Klar, J., L'Heureux, R., Perry, M., Burger, D., Chung, E., Deng, Z., Naghshineh, S., Park, J., and Naumov, M. With shared microexponents, a little shifting goes a long way, 2023. URL https://arxiv.org/abs/2302.08007.
Stock, P., Joulin, A., Gribonval, R., Graham, B., and Jégou, H. And the bit goes down: Revisiting the quantization of neural networks. In International Conference on Learning Representations, 2020. URL https://openreview.net/forum?id=rJehVyrKwH.
Sun, X., Wang, N., Chen, C.-Y., Ni, J., Agrawal, A., Cui, X., Venkataramani, S., El Maghraoui, K., Srinivasan, V. V., and Gopalakrishnan, K.
Ultra-low precision 4-bit training of deep neural networks. In Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M., and Lin, H. (eds.), Advances in Neural Information Processing Systems, volume 33, pp. 1796-1807. Curran Associates, Inc., 2020. URL https://proceedings.neurips.cc/paper_files/paper/2020/file/13b919438259814cd5be8cb45877d577-Paper.pdf. +Touvron, H., Lavril, T., Izacard, G., Martinet, X., Lachaux, M.-A., Lacroix, T., Rozière, B., Goyal, N., Hambro, E., Azhar, F., Rodriguez, A., Joulin, A., Grave, E., and Lample, G. Llama: Open and efficient foundation language models, 2023a. URL https://arxiv.org/abs/2302.13971. + +Touvron, H., Martin, L., Stone, K., Albert, P., Almahairi, A., Babaei, Y., Bashlykov, N., Batra, S., Bhargava, P., Bhosale, S., Bikel, D., Blecher, L., Ferrer, C. C., Chen, M., Cucurull, G., Esiobu, D., Fernandes, J., Fu, J., Fu, W., Fuller, B., Gao, C., Goswami, V., Goyal, N., Hartshorn, A., Hosseini, S., Hou, R., Inan, H., Kardas, M., Kerkez, V., Khabsa, M., Kloumann, I., Korenev, A., Koura, P. S., Lachaux, M.-A., Lavril, T., Lee, J., Liskovich, D., Lu, Y., Mao, Y., Martinet, X., Mihaylov, T., Mishra, P., Molybog, I., Nie, Y., Poulton, A., Reizenstein, J., Rungta, R., Saladi, K., Schelten, A., Silva, R., Smith, E. M., Subramanian, R., Tan, X. E., Tang, B., Taylor, R., Williams, A., Kuan, J. X., Xu, P., Yan, Z., Zarov, I., Zhang, Y., Fan, A., Kambadur, M., Narang, S., Rodriguez, A., Stojnic, R., Edunov, S., and Scialom, T. Llama 2: Open foundation and fine-tuned chat models, 2023b. URL https://arxiv.org/abs/2307.09288. +Tseng, A., Sun, Q., Hou, D., and Sa, C. D. QTIP: Quantization with trellises and incoherence processing. In The Thirty-eighth Annual Conference on Neural Information Processing Systems, 2024. URL https://openreview.net/forum?id=7sdkLVuYCU. +Wang, N., Choi, J., Brand, D., Chen, C.-Y., and Gopalakrishnan, K. Training deep neural networks with 8-bit floating point numbers. In Bengio, S., Wallach, H., Larochelle, H., Grauman, K., Cesa-Bianchi, N., and Garnett, R. (eds.), Advances in Neural Information Processing Systems, volume 31. Curran Associates, Inc., 2018. URL https://proceedings.neurips.cc/paper_files/paper/2018/file/335d3d1cd7ef05ec77714a215134914c-Paper.pdf. +Wang, S. I., Gu, A., Madaan, L., Hupkes, D., Liu, J., Wei, Y., Jain, N., Lai, Y., Sootla, S., Press, O., Rozière, B., and Synnaeve, G. Eval-Arena: noise and errors on llm evaluations. https://github.com/cruz-eval/eval-arena, 2024. +Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., von Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Scao, T. L., Gugger, S., Drame, M., Lhoest, Q., and Rush, A. M. Transformers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38-45, Online, October 2020. Association for Computational Linguistics. URL https://www.aclweb.org/anthology/2020.emnlp-demos.6. +Xia, L., Anthonissen, M., Hochstenbach, M., and Koren, B. A simple and efficient stochastic rounding method for training neural networks in low precision, 2021. URL https://arxiv.org/abs/2103.13445. + +Xiao, G., Lin, J., Seznec, M., Wu, H., Demouth, J., and Han, S. SmoothQuant: Accurate and efficient post-training quantization for large language models. In Proceedings of the 40th International Conference on Machine Learning, 2023. +Xie, Y. and Jabri, M. 
Analysis of the effects of quantization in multilayer neural networks using a statistical model. IEEE Transactions on Neural Networks, 3(2):334-338, 1992. doi: 10.1109/72.125876.
Yoshida, D. Nf4 isn't information theoretically optimal (and that's good), 2023. URL https://arxiv.org/abs/2306.06965.
Zadeh, A. H., Edo, I., Awad, O. M., and Moshovos, A. GOBO: Quantizing attention-based NLP models for low latency and energy efficient inference. In 2020 53rd Annual IEEE/ACM International Symposium on Microarchitecture (MICRO), pp. 811-824, Los Alamitos, CA, USA, October 2020. IEEE Computer Society. doi: 10.1109/MICRO50266.2020.00071. URL https://doi.ieeecomputersociety.org/10.1109/MICRO50266.2020.00071.

# Appendix

# A. Solution Details

We provide here more details about our proposed any4 algorithm.

# A.1. Algorithm

We summarize our any4 quantization algorithm in Alg. 1.

Algorithm 1 any4 quantization algorithm.
```python
import torch

# Pseudocode: `calibrate` (returns the per-channel mean absolute activation
# E|x| of every linear module on the calibration sample) and `scale`
# (group-wise scaling, Eqns. 1-3) are defined elsewhere; `kmeans` runs the
# weighted E/M iterations of Section 4.1 and returns each element snapped
# to its cluster centroid.
module2input = calibrate(model, sample_data)
for module in model.modules():
    if not isinstance(module, torch.nn.Linear):
        continue
    w = module.weight.data
    wQ = torch.zeros_like(w)
    alphas, betas = [], []
    xi = module2input[module]                # E|x_j| for each input channel j
    for i in range(w.shape[0]):              # fit one 16-entry LUT per row
        wSi, alphai, betai = scale(w[i, :])  # scaled row, group scales, offsets
        wQ[i, :] = kmeans(
            samples=wSi,
            sample_weight=alphai * xi,       # weighting of Eqn. 23
        )
        alphas.append(alphai)
        betas.append(betai)
    module.weight.data = wQ
    module.alpha = alphas
    module.beta = betas
```

# B. Further Results

# B.1. Comparison with Other Numeric Formats

We compare our any4 numeric format with other numeric formats for the Llama2 family of models in Table A1 and for Mistral-7B and Mixtral-7B in Table A2.
**Mistral-7B Instruct v0.2**

| | WikiText-2 ↓ | C4 ↓ | PTB ↓ | CodeParrot ↓ | MMLU ↑ | HellaSwag ↑ | GSM8K ↑ | BigBench ↑ |
|---|---|---|---|---|---|---|---|---|
| FP16 | 5.95 | 8.82 | 21.77 | 2.63 | 58.7% | 66.1% | 41.7% | 51.7% |
| INT4 | 6.14 | 9.03 | 22.02 | 2.70 | 57.1% | 65.1% | 39.7% | 50.4% |
| FP4 | 6.19 | 9.10 | 21.62 | 2.70 | 56.6% | 64.7% | 38.2% | 47.7% |
| NF4 | 6.06 | 8.93 | 24.72 | 2.66 | 58.0% | 65.5% | 38.5% | 51.8% |
| ANY4 | 6.00 | 8.85 | 23.24 | 2.64 | 58.6% | 65.4% | 41.1% | 51.7% |

**Mixtral-8x7B Instruct v0.1**

| | WikiText-2 ↓ | C4 ↓ | PTB ↓ | CodeParrot ↓ | MMLU ↑ | HellaSwag ↑ | GSM8K ↑ | BigBench ↑ |
|---|---|---|---|---|---|---|---|---|
| FP16 | 4.14 | 7.18 | 16.47 | 2.20 | 68.2% | 67.6% | 64.8% | 68.1% |
| INT4 | 4.45 | 7.45 | 16.84 | 2.26 | 66.5% | 66.3% | 57.8% | 61.8% |
| FP4 | 4.46 | 7.48 | 18.42 | 2.27 | 66.8% | 66.5% | 59.4% | 62.8% |
| NF4 | 4.30 | 7.32 | 15.00 | 2.24 | 67.6% | 67.2% | 61.0% | 66.5% |
| ANY4 | 4.27 | 7.27 | 16.14 | 2.22 | 67.7% | 67.1% | 62.8% | 65.8% |
+ +Table A2: Quantizing Mistral and Mixtral with various numeric formats. + +
**Llama2 7B**

| | WikiText-2 ↓ | C4 ↓ | PTB ↓ | CodeParrot ↓ | HumanEval Pass@1 ↑ | MBPP Pass@1 ↑ | MMLU ↑ | HellaSwag ↑ | GSM8K ↑ | BBH ↑ |
|---|---|---|---|---|---|---|---|---|---|---|
| FP16 | 5.47 | 6.97 | 20.83 | 2.54 | 17.1% | 20.0% | 41.3% | 57.2% | 13.6% | 39.8% |
| INT4 | 5.74 | 7.30 | 24.00 | 2.63 | 10.4% | 18.2% | 38.1% | 56.4% | 10.6% | 36.5% |
| FP4 | 5.83 | 7.37 | 22.57 | 2.65 | 11.0% | 16.8% | 36.5% | 56.6% | 11.2% | 35.5% |
| NF4 | 5.66 | 7.19 | 22.82 | 2.60 | 11.6% | 19.2% | 37.4% | 56.8% | 12.0% | 36.8% |
| ANY4 | 5.59 | 7.10 | 21.23 | 2.57 | 14.0% | 18.4% | 40.3% | 56.7% | 12.7% | 36.9% |

**Llama2 13B**

| | WikiText-2 ↓ | C4 ↓ | PTB ↓ | CodeParrot ↓ | HumanEval Pass@1 ↑ | MBPP Pass@1 ↑ | MMLU ↑ | HellaSwag ↑ | GSM8K ↑ | BBH ↑ |
|---|---|---|---|---|---|---|---|---|---|---|
| FP16 | 4.88 | 6.47 | 28.93 | 2.40 | 19.5% | 18.4% | 50.5% | 60.0% | 23.2% | 47.4% |
| INT4 | 5.05 | 6.65 | 30.79 | 2.45 | 15.2% | 16.4% | 48.8% | 59.3% | 20.8% | 44.2% |
| FP4 | 5.07 | 6.67 | 30.96 | 2.46 | 15.2% | 16.2% | 49.5% | 59.3% | 19.3% | 43.0% |
| NF4 | 4.99 | 6.58 | 31.17 | 2.43 | 15.9% | 16.0% | 49.9% | 59.9% | 22.1% | 44.6% |
| ANY4 | 4.97 | 6.55 | 28.83 | 2.42 | 15.2% | 18.0% | 49.3% | 59.5% | 21.6% | 44.6% |

**Llama2 70B**

| | WikiText-2 ↓ | C4 ↓ | PTB ↓ | CodeParrot ↓ | HumanEval Pass@1 ↑ | MBPP Pass@1 ↑ | MMLU ↑ | HellaSwag ↑ | GSM8K ↑ | BBH ↑ |
|---|---|---|---|---|---|---|---|---|---|---|
| FP16 | 3.32 | 5.52 | 14.44 | 2.11 | 31.7% | 37.4% | 65.2% | 64.8% | 53.3% | 67.1% |
| INT4 | 3.46 | 5.61 | 14.71 | 2.14 | 26.8% | 37.8% | 64.4% | 64.7% | 51.4% | 65.0% |
| FP4 | 3.53 | 5.67 | 14.34 | 2.16 | 28.0% | 30.6% | 64.1% | 64.0% | 51.6% | 65.0% |
| NF4 | 3.44 | 5.61 | 14.65 | 2.14 | 29.9% | 37.2% | 64.5% | 63.9% | 50.6% | 65.4% |
| ANY4 | 3.40 | 5.58 | 14.64 | 2.13 | 26.8% | 35.8% | 64.8% | 64.5% | 51.6% | 66.6% |

Table A1: Quantizing Llama2 models with various numeric formats.
# C. Further Ablation Studies

# C.1. Minimization Terms

In Table A3 we ablate the term that is minimized when learning (via K-means clustering) the LUT of each row in the weight matrix. The first row shows the result of optimizing the weights directly. The other two rows show the results of adding the two remaining factors of Equation 14, i.e., multiplying by the activations and by the group scales. These results confirm that the derivation leading to all the terms of Equation 14 is essential for optimal accuracy.
**Llama3.2 1B**

| Term to Minimize | WikiText-2 ↓ | C4 ↓ | PTB ↓ | CodeParrot ↓ |
|---|---|---|---|---|
| Weights only: $(w_{S_{i,j}} - w_{Q_{i,j}})$ | 6.680 | 9.619 | 11.186 | 2.751 |
| Weights × activations: $(w_{S_{i,j}} x_j - w_{Q_{i,j}} x_j)$ | 6.496 | 9.375 | 11.055 | 2.675 |
| Weights × activations × group scales [Ours]: $(\alpha_{i,j} w_{S_{i,j}} x_j - \alpha_{i,j} w_{Q_{i,j}} x_j)$ | 6.487 | 9.366 | 11.034 | 2.680 |

Table A3: Perplexity after quantizing Llama3.2 1B with LUTs created by minimizing different terms.
# C.2. K-Means Initialization

We use scikit-learn (Pedregosa et al., 2011) to implement the K-means clustering that is core to any4's quantization algorithm. By default, scikit-learn initializes cluster centroids using the k-means++ algorithm (Arthur & Vassilvitskii, 2007), but it also supports random initialization, as well as initialization with a vector of pre-defined values. In Table A4 we ablate K-means initialization on Llama3.2 1B by evaluating k-means++ and random initialization, as well as seeding with uniform int4 values (i.e., integer values -7 to 8) and nf4 values (ranging from -1 to +1). We see that k-means++ clearly performs best, while uniform int4 initialization performs the worst.
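In scikit-learn terms, these settings correspond to different values of the `init` argument of `KMeans`; a rough sketch (with placeholder random data, and a uniform grid standing in for the actual nf4 values) looks like:

```python
import numpy as np
from sklearn.cluster import KMeans

w_scaled = np.random.default_rng(0).standard_normal(4096).reshape(-1, 1)

# Fixed seeds for the "int4" and "nf4" style initializations (shape: 16 x 1).
int4_init = np.arange(-7, 9, dtype=float).reshape(-1, 1)   # integers -7 ... 8
nf4_init = np.linspace(-1.0, 1.0, 16).reshape(-1, 1)       # placeholder grid in [-1, 1]

for name, init in [("k-means++", "k-means++"),
                   ("random", "random"),
                   ("int4", int4_init),
                   ("nf4", nf4_init)]:
    km = KMeans(n_clusters=16, init=init, n_init=1, random_state=0).fit(w_scaled)
    print(name, np.sort(km.cluster_centers_.squeeze()))
```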
Llama3.2 1B
K-Means InitializationPerplexity ↓
WikiText-2C4PTB
FP169.7612.7716.56
ANY4k-means++10.6313.9517.94
ANY4random10.6613.9718.17
ANY4int410.8314.2118.69
ANY4nf410.6513.9618.21
+ +Table A4: any4 quantization with K-means clustering initialized with different algorithms and values. \ No newline at end of file diff --git a/ICML/2025/any4_ Learned 4-bit Numeric Representation for LLMs/images.zip b/ICML/2025/any4_ Learned 4-bit Numeric Representation for LLMs/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..a0b1503ca22c064cd8eff1fcc53f3f01ba45b0e7 --- /dev/null +++ b/ICML/2025/any4_ Learned 4-bit Numeric Representation for LLMs/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f5860df3ba53e3c9c1977c2c5b5e4609fd58057b7ee172c14b25a263be727ccf +size 797784 diff --git a/ICML/2025/any4_ Learned 4-bit Numeric Representation for LLMs/layout.json b/ICML/2025/any4_ Learned 4-bit Numeric Representation for LLMs/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..89904593b50ce6bfa9b217d6f2f2b1a45e9673d6 --- /dev/null +++ b/ICML/2025/any4_ Learned 4-bit Numeric Representation for LLMs/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f60ff2a13d2a1e1c7c4bfebe34685e532077c06b0df7249dff739405c16db758 +size 535826 diff --git "a/ICML/2025/e-GAI_ e-value-based Generalized $\316\261$-Investing for Online False Discovery Rate Control/dabdc8cf-f6a6-4355-a6eb-b0bb1e491482_content_list.json" "b/ICML/2025/e-GAI_ e-value-based Generalized $\316\261$-Investing for Online False Discovery Rate Control/dabdc8cf-f6a6-4355-a6eb-b0bb1e491482_content_list.json" new file mode 100644 index 0000000000000000000000000000000000000000..88e157d084e590115d81d554903b5589d15f79b3 --- /dev/null +++ "b/ICML/2025/e-GAI_ e-value-based Generalized $\316\261$-Investing for Online False Discovery Rate Control/dabdc8cf-f6a6-4355-a6eb-b0bb1e491482_content_list.json" @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ee247cb4c27eab3a034187e589c4c74e951a7d0b299f45eb2e854fbffc92c7e7 +size 179852 diff --git "a/ICML/2025/e-GAI_ e-value-based Generalized $\316\261$-Investing for Online False Discovery Rate Control/dabdc8cf-f6a6-4355-a6eb-b0bb1e491482_model.json" "b/ICML/2025/e-GAI_ e-value-based Generalized $\316\261$-Investing for Online False Discovery Rate Control/dabdc8cf-f6a6-4355-a6eb-b0bb1e491482_model.json" new file mode 100644 index 0000000000000000000000000000000000000000..42c7edb6c5ac5bc163eca55bf599efe025cf31ac --- /dev/null +++ "b/ICML/2025/e-GAI_ e-value-based Generalized $\316\261$-Investing for Online False Discovery Rate Control/dabdc8cf-f6a6-4355-a6eb-b0bb1e491482_model.json" @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f99c13f8272544eb50a5c0226b729f1199a95fa1ba7ff102f95bbba1b8e7a65c +size 212300 diff --git "a/ICML/2025/e-GAI_ e-value-based Generalized $\316\261$-Investing for Online False Discovery Rate Control/dabdc8cf-f6a6-4355-a6eb-b0bb1e491482_origin.pdf" "b/ICML/2025/e-GAI_ e-value-based Generalized $\316\261$-Investing for Online False Discovery Rate Control/dabdc8cf-f6a6-4355-a6eb-b0bb1e491482_origin.pdf" new file mode 100644 index 0000000000000000000000000000000000000000..4e37fdbb8c5c2531025b72ac9aa3491278527c63 --- /dev/null +++ "b/ICML/2025/e-GAI_ e-value-based Generalized $\316\261$-Investing for Online False Discovery Rate Control/dabdc8cf-f6a6-4355-a6eb-b0bb1e491482_origin.pdf" @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ac0fc12a97b04730efdd558c55a47b0f1e2a31ecee56b308343eb6ff0e24d197 +size 1874064 diff --git "a/ICML/2025/e-GAI_ e-value-based Generalized $\316\261$-Investing for Online False 
Discovery Rate Control/full.md" "b/ICML/2025/e-GAI_ e-value-based Generalized $\316\261$-Investing for Online False Discovery Rate Control/full.md" new file mode 100644 index 0000000000000000000000000000000000000000..29c2b77846fc4d1e708da5ef9d180b4fe74bffa6 --- /dev/null +++ "b/ICML/2025/e-GAI_ e-value-based Generalized $\316\261$-Investing for Online False Discovery Rate Control/full.md" @@ -0,0 +1,853 @@ +# e-GAI: e-value-based Generalized $\alpha$ -Investing for Online False Discovery Rate Control + +Yifan Zhang1 Zijian Wei1 Haojie Ren1 Changliang Zou2 + +# Abstract + +Online multiple hypothesis testing has attracted a lot of attention in many applications, e.g., anomaly status detection and stock market price monitoring. The state-of-the-art generalized $\alpha$ -investing (GAI) algorithms can control online false discovery rate (FDR) on p-values only under specific dependence structures, a situation that rarely occurs in practice. The e-LOND algorithm (Xu & Ramdas, 2024) utilizes e-values to achieve online FDR control under arbitrary dependence but suffers from a significant loss in power as testing levels are derived from prespecified descent sequences. To address these limitations, we propose a novel framework on valid e-values named e-GAI. The proposed e-GAI can ensure provable online FDR control under more general dependency conditions while improving the power by dynamically allocating the testing levels. These testing levels are updated not only by relying on both the number of previous rejections and the prior costs, but also, differing from the GAI framework, by assigning less $\alpha$ -wealth for each rejection from a risk aversion perspective. Within the e-GAI framework, we introduce two new online FDR procedures, e-LORD and e-SAFFRON, and provide strategies for the long-term performance to address the issue of $\alpha$ -death, a common phenomenon within the GAI framework. Furthermore, we demonstrate that e-GAI can be generalized to conditionally superuniform p-values. Both simulated and real data experiments demonstrate the advantages of both e-LORD and e-SAFFRON in FDR control and power. + +$^{1}$ School of Mathematical Sciences, Shanghai Jiao Tong University, Shanghai, China $^{2}$ School of Statistics and Data Sciences, LPMC, KLMDASR and LEBPS, Nankai University, Tianjin, China. Correspondence to: Haojie Ren . + +Proceedings of the $42^{nd}$ International Conference on Machine Learning, Vancouver, Canada. PMLR 267, 2025. Copyright 2025 by the author(s). + +# 1. Introduction + +The online multiple hypothesis testing problem arises from a range of applications. For example, regulators record the number of NYC taxi passengers every 30 minutes, aiming to detect the anomalous intervals corresponding to special events (Lavin & Ahmad, 2015); economists build online monitoring procedures based on monthly stock market prices to identify bubbles in financial series (Genoni et al., 2023); industrial factories monitor machine operation status in real time for fault detection, thereby enabling early warnings for potential system issues (Ahmad et al., 2017). These different scenarios can all be formulated as the online multiple testing problem, which is concerned with the investigation of an online sequence of null hypotheses. At each time $t$ , we must immediately make a real-time decision on whether to reject the current hypothesis based on all the observed data so far, without having knowledge of future data or the total number of hypotheses. 
+ +Consider an online sequence of null hypotheses $\mathbb{H}_1, \ldots, \mathbb{H}_t, \ldots$ . Define $\theta_t = 0/1$ if $\mathbb{H}_t$ is true/false for each time $t$ and a class of online decision rules $\delta_t = \{\delta_j : j = 1, \ldots, t\}$ , where $\delta_t = 1$ indicates that $\mathbb{H}_t$ is rejected and $\delta_t = 0$ otherwise. It is necessary to control the error rates of those decisions $\delta_t$ . A natural quantity to control is the false discovery rate (FDR) at target level $\alpha$ as introduced by Benjamini & Hochberg (1995), that is, the ratio of falsely rejected nulls to the total number of rejections. The online FDR is defined as: + +$$ +\operatorname {F D R} (t) = \mathbb {E} [ \operatorname {F D P} (t) ] = \mathbb {E} \left[ \frac {\sum_ {j \in \mathcal {H} _ {0} (t)} \delta_ {j}}{\left(\sum_ {j = 1} ^ {t} \delta_ {j}\right) \vee 1} \right] \leq \alpha , +$$ + +where $\mathcal{H}_0(t) = \{j\leq t:\theta_j = 0\}$ is the true null set up to time $t$ + +Related works. Methods for online FDR control were pioneered by Foster & Stine (2008), who proposed the so-called $\alpha$ -investing (AI) strategy. It was further extended by the generalized $\alpha$ -investing (GAI) procedure, which has served as the fundamental framework for online testing problem (Aharoni & Rosset, 2014; Javanmard & Montanari, 2018). GAI deals with a sequence of p-values $p_1, \ldots, p_t, \ldots$ and + +encompasses a wide class of algorithms to assign the testing levels $\alpha_{1},\ldots ,\alpha_{t}$ in an online fashion, effectively rejecting the $t$ -th null hypothesis whenever $p_t\leq \alpha_t$ (Ramdas et al., 2017; 2018; Tian & Ramdas, 2019). The GAI ensures provable online FDR control only for the p-values with independence or positive regression dependence on a subset (PRDS; Benjamini & Yekutieli, 2001), which is rarely the case in practice; see definition of PRDS in Appendix B.1. In fact, many applications involve unknown complex dependence, such as time dependence in stock market prices. Hence, it is important to develop online multiple testing procedures under more general dependency conditions. + +Xu & Ramdas (2022) proposed the SupLORD algorithm with the aim of false discovery exceedance control and discovered that, under a weaker baseline assumption, i.e., the null p-values are conditionally super-uniform as formalized in (1) in the subsequent text, it ensures valid FDR control at arbitrary stopping times. However, the SupLORD algorithm necessitates the selection of multiple parameters, which are intricately linked to its performance, and currently lacks well-established criteria for their optimal selection. + +To deal with the problem introduced by dependence, another strategic direction is proposed by Wang & Ramdas (2022) to utilize e-values as potential alternatives to p-values as measures of uncertainty, significance, and evidence. The e-values have gained considerable attention, and many works have devoted significant effort to constructing valid e-values (Vovk & Wang, 2021; Ren & Barber, 2024; Li & Zhang, 2025) and applying e-values to ensure offline FDR control (Wang & Ramdas, 2022). In the online testing problem, a related work is Xu & Ramdas (2024), which exploited e-values and proposed the e-LOND algorithm to provide online FDR control under arbitrary, possibly unknown, dependence. 
However, the e-LOND algorithm does not make full use of the entire error budget and assigns testing levels only through pre-specified descending sequences tied to the number of rejections. Hence, e-LOND yields conservative FDR and sacrifices power, which hampers its practical use. Xu & Ramdas (2024) proposed to improve e-LOND by incorporating independent randomization, though this yields only a modest improvement in power while introducing additional randomness.

Therefore, a natural question is whether it is possible to construct a GAI-like framework based on e-values to achieve online FDR control under more general dependence, i.e., conditional validity in (2), while efficiently and effectively assigning testing levels to achieve high power.

Our contributions. To address this challenge, this paper proposes a novel framework based on e-values, named e-value-based generalized $\alpha$-investing (e-GAI). Our contributions are summarized as follows:

- The e-GAI framework ensures online FDR control based on conditionally valid e-values with theoretical guarantees, which is achieved through a new FDP estimator. In contrast to GAI, we propose a novel investing strategy named risk aversion investing (RAI) built on the new FDP estimator, enabling e-GAI to dynamically allocate testing levels based on both prior rejections and costs and to assign less $\alpha$-wealth for each rejection to save budget.
- Within the e-GAI framework, we propose two new algorithms called e-LORD and e-SAFFRON. Furthermore, considering the long-term performance, we propose corresponding algorithms, mem-e-LORD and mem-e-SAFFRON, to address the issue of $\alpha$-death, a common phenomenon in the GAI framework.
- Moreover, the e-GAI framework can be generalized to conditionally super-uniform p-values while preserving guaranteed FDR control. Numerical results demonstrate that the algorithms within the e-GAI framework are effective for online FDR control and achieve higher power compared to existing methods.

We compare e-GAI with several commonly used algorithms and summarize their characteristics in Table 1.

# 2. Preliminaries

# 2.1. p-values & e-values

In this paper, the goal is to make a real-time decision $\delta_t$ while controlling the online FDR at a user-specified level $\alpha$. The rejection decision $\delta_t$ with p-values or e-values is defined as, respectively,

$$
\delta_{t} = \left\{ \begin{array}{ll} \mathbb{1}\left\{p_{t} \leq \alpha_{t}\right\}, & \text{if using p-values,} \\ \mathbb{1}\left\{e_{t} \geq \frac{1}{\alpha_{t}}\right\}, & \text{if using e-values.} \end{array} \right.
$$

Denote $\mathcal{F}_t = \sigma(\delta_1,\ldots,\delta_t)$ as the sigma-field at time $t$, generated by the historical decisions. Here the testing level $\alpha_{t}$ is required to be predictable at time $t$, that is, $\alpha_{t}$ is $\mathcal{F}_{t-1}$-measurable, i.e., $\alpha_{t}\in \mathcal{F}_{t-1}$.

In the studies of online testing, a valid p-value $p_t$ satisfies the conditionally super-uniform property under the null:

$$
\mathbb{P}\left(p_{t} \leq u \mid \mathcal{F}_{t-1}\right) \leq u \quad \text{for all } u \in [0, 1] \text{ if } \theta_{t} = 0. \tag{1}
$$

Meanwhile, a non-negative variable $e_t$ is a valid e-value if it satisfies the conditional validity:

$$
\mathbb{E}\left[e_{t} \mid \mathcal{F}_{t-1}\right] \leq 1 \text{ if } \theta_{t} = 0.
\tag{2}
$$

Table 1. Online testing algorithms with their properties and performance of FDR control.

| Framework | Algorithm | Statistics | Dependence conditions | $\alpha_t$ relying on prior costs |
| --- | --- | --- | --- | --- |
| GAI | LORD++ (Ramdas et al., 2017) | p-value | Independence or PRDS | |
| GAI | SAFFRON (Ramdas et al., 2018) | p-value | Independence or PRDS | |
| GAI | SupLORD (Xu & Ramdas, 2022) | p-value | (1) | |
| e-GAI | e-LOND (Xu & Ramdas, 2024) | e-value | Arbitrary dependence | ✗ |
| e-GAI | e-LORD | e-value | (2) | |
| e-GAI | e-SAFFRON | e-value | (2) | |

In contrast to the condition on the distribution of a p-value in (1), (2) only requires that the expectation of the e-value
+ +exists and is bounded. This relaxed restriction provides greater flexibility in constructing valid e-values for various practical purposes (Vovk & Wang, 2021; Wang & Ramdas, 2022; Ren & Barber, 2024; Li & Zhang, 2025). + +# 2.2. Recap: GAI + +The GAI rules are capable of handling an infinite stream of hypotheses and incorporating informative domain knowledge into a dynamic decision-making process. Beginning with a pre-specified $\alpha$ -wealth, the key idea in GAI algorithms is that each rejection gains some extra $\alpha$ -wealth, which may be subsequently used to make more discoveries at later time points. + +Ramdas et al. (2017) provided a statistical perspective on online FDR procedures and proposed to design new algorithms by keeping an estimate of online FDP less than $\alpha$ . Specifically, Ramdas et al. (2017) proposed an oracle approximation of online FDP as: + +$$ +\mathrm {F D P} ^ {*} (t) = \frac {\sum_ {j \in \mathcal {H} _ {0} (t)} \alpha_ {j}}{\left(\sum_ {j = 1} ^ {t} \delta_ {j}\right) \vee 1}. \tag {3} +$$ + +This $\mathrm{FDP}^* (t)$ overestimates the unknown $\mathrm{FDP}(t)$ and provides guidance for online FDR procedures based on independent p-values, including LORD++ (Ramdas et al., 2017), SAFFRON (Ramdas et al., 2018) and ADDIS (Tian & Ramdas, 2019) algorithms. + +Specifically, LORD++ (Ramdas et al., 2017) realizes online FDR control by providing a simple upper bound of $\mathrm{FDP}^* (t)$ : + +$$ +\widehat {\mathrm {F D P}} ^ {\text {L O R D}} (t) = \frac {\sum_ {j = 1} ^ {t} \alpha_ {j}}{\left(\sum_ {j = 1} ^ {t} \delta_ {j}\right) \vee 1}. \tag {4} +$$ + +If the proportion of alternatives is non-negligible, then $\widehat{\mathrm{LORD}}++$ with $\widehat{\mathrm{FDP}}^{\mathrm{LORD}}(t)$ yields very conservative results due to the overestimation of $\mathrm{FDP}^{*}(t)$ . + +Motivated by Storey-BH (Storey, 2002), SAFFRON (Ramdas et al., 2018) was derived from an adaptive upper bound + +estimate by approximating the proportions of nulls: + +$$ +\widehat {\mathrm {F D P}} ^ {\text {S A F F R O N}} (t) = \frac {\sum_ {j = 1} ^ {t} \alpha_ {j} \frac {\mathbb {1} \left\{p _ {j} > \lambda \right\}}{1 - \lambda}}{\left(\sum_ {j = 1} ^ {t} \delta_ {j}\right) \vee 1}, \tag {5} +$$ + +where $\lambda \in (0,1)$ is a user-chosen parameter. + +If the null p-values are independent of each other and of the non-nulls, and $\{\alpha_t\}$ is chosen to be a (coordinate-wise) monotone function of $\delta_{t-1}$ , then LORD++ and SAFFRON control the FDR at all times. Fisher (2024) considered the performance of LORD++ and SAFFRON and proved online FDR control when the popular PRDS condition (Benjamini & Yekutieli, 2001) holds. However, the conditions of independence or PRDS are usually violated in practical applications. + +# 3. e-GAI: e-value-based GAI + +In this section, we first define the oracle estimate of FDP that is tailored for e-values and demonstrate the theoretical results for FDR control (Section 3.1). We then design a new investing strategy based on the new proposed FDP estimator, and propose our testing algorithms, e-LORD (Section 3.2) and e-SAFFRON (Section 3.3), from the risk aversion perspective to optimize the use of a limited budget. + +# 3.1. Online FDR Control with e-values + +Suppose we observe valid e-values $e_1, \ldots, e_t, \ldots$ and make decision $\delta_t = \mathbb{1}\{e_t \geq 1 / \alpha_t\}$ at each time $t$ . Inspired by (3), we define an oracle e-value-based estimate of FDP and bound this overestimate to realize FDR control. 
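As a point of reference before introducing the e-value-based estimator, the two classical GAI estimators (4) and (5) are straightforward to compute in a streaming fashion. The following minimal NumPy sketch is illustrative only; the p-values, testing levels, and $\lambda$ below are placeholders rather than the schedules produced by LORD++ or SAFFRON themselves.

```python
import numpy as np

def fdp_hat_lord(alphas, decisions):
    """Running LORD++-style estimate (4): cumulative levels over rejections so far."""
    num = np.cumsum(np.asarray(alphas, dtype=float))
    den = np.maximum(np.cumsum(np.asarray(decisions, dtype=int)), 1)
    return num / den

def fdp_hat_saffron(alphas, pvals, decisions, lam=0.5):
    """Running SAFFRON-style adaptive estimate (5) with a fixed lambda."""
    alphas = np.asarray(alphas, dtype=float)
    pvals = np.asarray(pvals, dtype=float)
    num = np.cumsum(alphas * (pvals > lam) / (1.0 - lam))
    den = np.maximum(np.cumsum(np.asarray(decisions, dtype=int)), 1)
    return num / den

# Placeholder stream: constant levels, uniform p-values (for illustration only).
rng = np.random.default_rng(0)
pvals = rng.uniform(size=20)
alphas = np.full(20, 0.005)
decisions = (pvals <= alphas).astype(int)
print(fdp_hat_lord(alphas, decisions)[-1], fdp_hat_saffron(alphas, pvals, decisions)[-1])
```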
Denote $R_t = \sum_{j=1}^{t} \delta_j$ as the rejection size up to $t$ . + +Theorem 3.1. Suppose the online $e$ -values are valid in (2). Let the oracle $e$ -value-based estimate of FDP be given as + +$$ +\mathrm {F D P} _ {\mathrm {e}} ^ {*} (t) = \sum_ {j \in \mathcal {H} _ {0} (t)} \frac {\alpha_ {j}}{R _ {j - 1} + 1}. \tag {6} +$$ + +If $\mathbb{E}\left[\mathrm{FDP}_{\mathrm{e}}^{*}(t)\right] \leq \alpha$ , then $\mathrm{FDR}(t) \leq \alpha$ for all $t$ . + +Theorem 3.1 provides a general theoretical result for FDR control with conditionally valid dependent e-values, guiding + +and inspiring the construction of testing levels $\{\alpha_t\}$ , which will be detailed in the following subsection. The proof of Theorem 3.1 and any other necessary proofs will be detailed in Appendix A. + +Before pursuing further, we discuss the effect of the denominator $R_{j-1} + 1$ of $\mathrm{FDP}_{\mathrm{e}}^{*}(t)$ in (6). To avoid the dependence inflating FDR, the denominator $R_{j-1} + 1$ of $\mathrm{FDP}_{\mathrm{e}}^{*}(t)$ plays an important role in "predicting" the number of possible future rejections at each rejection moment. As the true denominator $R_{t} \vee 1$ of FDP is unobservable at time $j-1$ and $R_{j-1} + 1 \leq (R_{t} \vee 1)$ holds for each $j \in \{\ell \leq t : \delta_{\ell} = 1\}$ , we use $R_{j-1} + 1$ as a substitute that serves as a $(j-1)$ -measurable lower bound for $R_{t} \vee 1$ . Since it is placed in the denominator, this results in an overestimation of the oracle FDP, which can subsequently be leveraged to achieve FDR control. + +When complex dependence exists, the correlation between the number of false rejections and the total number of rejections cannot be characterized, making it impossible to control their proportion. For instance, consider that the sequential e-values are strongly positively correlated. When a false rejection occurs, it indicates that the next e-value is likely to belong to the null but be falsely rejected as well, leading to an increased FDR. Hence, it implies that one can expect a quantity between $R_{j-1} + 1$ and $R_t$ that provides a more efficient approximation for the denominator of $\mathrm{FDP}_{\mathrm{e}}^{*}(t)$ when knowing some specific dependence structure among e-values, which warrants further study for improving the efficiency of e-GAI. Typically, the denominator of $\mathrm{FDP}_{\mathrm{e}}^{*}(t)$ in (6) can be directly chosen as $R_t$ for independent e-values; see details in Appendix B.5. + +# 3.2. e-LORD + +Inspired by the GAI framework, we design testing levels $\alpha_{t}$ by proposing an upper bound for (6) to realize FDR control according to Theorem 3.1. + +One natural overestimate of $\mathrm{FDP}_{\mathrm{e}}^{*}(t)$ is to define + +$$ +\widehat {\mathrm {F D P}} _ {\mathrm {e}} ^ {\text {L O R D}} (t) := \sum_ {j = 1} ^ {t} \frac {\alpha_ {j}}{R _ {j - 1} + 1}. \tag {7} +$$ + +Since $\widehat{\mathrm{FDP}}_{\mathrm{e}}^{\mathrm{LORD}}(t) \geq \mathrm{FDP}_{\mathrm{e}}^{*}(t)$ , any rejection rule algorithm assigning $\alpha_{t}$ in an online fashion such that $\widehat{\mathrm{FDP}}_{\mathrm{e}}^{\mathrm{LORD}}(t) \leq \alpha$ holds for all $t$ can control online FDR. + +Proposition 3.2. Suppose online $e$ -values are valid in (2). For $\alpha_{t} \in \mathcal{F}_{t-1}$ satisfying $\widehat{\mathrm{FDP}}_{\mathrm{e}}^{\mathrm{LORD}}(t) \leq \alpha$ , we have $\mathrm{FDR}(t) \leq \alpha$ for all $t$ . 
+ +Note that $\widehat{\mathrm{FDP}}_{\mathrm{e}}^{\mathrm{LORD}}(t)$ in (7) is constituted by the summation of terms associated with both $\alpha_{t}$ and $R_{t - 1}$ at each + +time point and illustrates that the prior costs associated with each testing (investing) will affect the current test. The target FDR level $\alpha$ is considered as the limited budget ( $\alpha$ -wealth) of the entire testing procedure, and Proposition 3.2 reveals that it cannot be increased once the testing begins. Unlike GAI's updating strategy, we cannot compensate for $\alpha$ -wealth in the subsequent testing process based on the estimate of FDP in e-GAI, as the complex correlations make it difficult to measure the future loss of one false discovery effectively. In contrast, we adopt a risk aversion investing (RAI) strategy to update $\alpha_{t}$ as follows. + +Intuitively, one may update testing levels by allocating a prescribed proportion of the remaining budget to satisfy the condition in Proposition 3.2. Therefore, we dynamically allocate testing levels as $\alpha_{1} = \alpha \omega_{1}$ and for $t\geq 2$ + +$$ +\alpha_ {t} = \omega_ {t} \left(\alpha - \sum_ {j = 1} ^ {t - 1} \frac {\alpha_ {j}}{R _ {j - 1} + 1}\right) (R _ {t - 1} + 1), \tag {8} +$$ + +where $\omega_{1},\ldots ,\omega_{t}\in (0,1)$ control the proportion of the remaining $\alpha$ -wealth allocated to the current testing. It's noted that a larger $\omega_{t}$ indicates that more $\alpha$ -wealth is currently invested, which also implies a greater possibility of rejecting the current hypothesis. However, since rejections are not able to gain additional wealth and each test consumes a proportion $\omega_{t}$ of the remaining wealth, e-GAI views the entire testing process as a risky investment. When failing to reject the current hypothesis, one may consider increasing the investment proportion $\omega_{t}$ to encourage further testing. Upon hypothesis rejection (deciding to invest), this decision carries both the risk of a false discovery and induces a significant downward bias in the denominator of the FDP estimator at earlier time points. As each rejection introduces new risks akin to an investment, we prioritize updating $\omega_{t}$ from the RAI perspective as + +$$ +\begin{array}{l} \omega_ {t + 1} = \omega_ {t} + \omega_ {1} \varphi^ {t - R _ {t}} \left(1 - \delta_ {t}\right) - \omega_ {1} \psi^ {R _ {t}} \delta_ {t} \tag {9} \\ = \omega_ {1} + \omega_ {1} \left(\sum_ {j = 1} ^ {t - R _ {t}} \varphi^ {j} - \sum_ {j = 1} ^ {R _ {t}} \psi^ {j}\right) \\ \end{array} +$$ + +with convention $\sum_{j=1}^{0} \varphi^j = \sum_{j=1}^{0} \psi^j = 0$ , where $\omega_1 \in (0,1)$ is a user-defined initial allocation coefficient, and $\varphi > 0$ , $\psi > 0$ are user-defined parameters that characterize the investment stimulation intensity post-acceptance and the risk regulation level post-rejection, respectively. + +Remark 3.3. To ensure that $\omega_{t} \in (0,1)$ for each time $t$ , we can select any $\omega_{1} \in (0,0.5)$ , $\varphi \in [0,0.5]$ , and $\psi \in [0,0.5]$ . In fact, the conditions can be relaxed to ensure that $\omega_{t}$ is $\mathcal{F}_{t-1}$ -measurable and $\omega_{t} \in (0,1)$ . 
We suggest choosing $\omega_{1} = O(1/T)$ with the total number of hypotheses $T$ to avoid spending too much wealth in the early stages while retaining sufficiently effective wealth for testing at each time point; more detailed discussions are provided in Appendix B.2. Note that the choice of $\omega_{t}$ satisfying the above conditions does not affect the guarantee of the FDR control. This observation opens up greater flexibility, enhances the applicability of our algorithms, and enables users to leverage domain knowledge for dynamically adjusting the allocation.

The whole algorithm is referred to as e-LORD and summarized in Algorithm 1. In e-LORD, the testing levels $\{\alpha_t\}$ are updated not only by relying on both the number of previous rejections and the prior costs, but also by assigning less $\alpha$-wealth for each rejection using the RAI strategy.

# Algorithm 1 e-LORD

1: Input: target FDR level $\alpha$, initial allocation coefficient $\omega_{1} \in (0, 1)$, parameters $\varphi$ and $\psi \in (0, 1)$, sequence of e-values $e_{1}, e_{2}, \ldots$
2: Calculate $\alpha_{1} = \alpha \omega_{1}$ and decide $\delta_1 = \mathbb{1}\left\{e_1\geq \frac{1}{\alpha_1}\right\}$;
3: Update $R_{1} = \delta_{1}$ and $\omega_{2}$ by (9);
4: for $t = 2,3,\ldots$ do
5: Update testing level $\alpha_{t}$ by (8);
6: Make decision $\delta_t = \mathbb{1}\left\{e_t \geq \frac{1}{\alpha_t}\right\}$;
7: Update $R_{t} = R_{t - 1} + \delta_{t}$ and $\omega_{t + 1}$ by (9);
8: end for
9: Output: decision set $\{\delta_1, \delta_2, \ldots\}$.

We find that the e-LOND algorithm proposed by Xu & Ramdas (2024) can be converted into the e-LORD algorithm. Let $\alpha_{t}^{\mathrm{e-LOND}}$ and $R_{t}^{\mathrm{e-LOND}}$ denote the testing level and the number of rejections at time $t$ in the e-LOND algorithm, respectively. Xu & Ramdas (2024) assigned $\alpha_{t}^{\mathrm{e-LOND}} = \alpha \gamma_{t}\left(R_{t - 1}^{\mathrm{e-LOND}} + 1\right)$, where $\{\gamma_t\}$ is a pre-specified non-negative sequence summing to one. It can be verified that $\widehat{\mathrm{FDP}}_{\mathrm{e}}^{\mathrm{LORD}}(t)$ in (7) satisfies

$$
\widehat{\mathrm{FDP}}_{\mathrm{e}}^{\mathrm{LORD}}(t) = \sum_{j = 1}^{t} \frac{\alpha_{j}^{\mathrm{e-LOND}}}{R_{j - 1}^{\mathrm{e-LOND}} + 1} = \alpha \sum_{j = 1}^{t} \gamma_{j} \leq \alpha .
$$

Furthermore, if choosing $\gamma_{t} = \omega_{t}\prod_{j = 1}^{t - 1}(1 - \omega_{j})$ in e-LOND with $\{\omega_t\}$ in e-LORD, then $\alpha_{t}^{\mathrm{e-LOND}}$ equals $\alpha_{t}$ in (8) for any $t$; refer to (18) in Appendix B.6 for more details. In this case, at each time $t$, the rejection set of e-LOND will be identical to the result of Algorithm 1. Thus, e-LOND can be viewed as a special case of e-LORD obtained by designing $\omega_{t}$ from a given sequence $\{\gamma_t\}$.

# 3.3. e-SAFFRON

To approximate $\mathrm{FDP}_{\mathrm{e}}^{*}(t)$, the estimate $\widehat{\mathrm{FDP}}_{\mathrm{e}}^{\mathrm{LORD}}(t)$ is calculated by summing all non-negative terms over time. Thus, it serves as a crude and conservative overestimate if the proportion of alternatives is non-negligible.
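Before turning to the adaptive estimate proposed next, the level allocation of e-LORD may be easier to see in code. Below is a minimal NumPy sketch of Algorithm 1 with the updates (8) and (9); the Gaussian likelihood-ratio e-values at the end are a purely illustrative stand-in, not the conditional construction used later in Section 5.

```python
import numpy as np

def e_lord(e_values, alpha=0.05, omega1=0.005, phi=0.5, psi=0.5):
    """Sketch of e-LORD (Algorithm 1): reject when e_t >= 1/alpha_t, with the
    testing level from (8) and the RAI update of omega_t from (9)."""
    decisions, levels = [], []
    spent = 0.0   # running sum_{j<t} alpha_j / (R_{j-1} + 1)
    R = 0         # number of rejections so far
    omega = omega1
    for t, e in enumerate(e_values, start=1):
        alpha_t = omega * (alpha - spent) * (R + 1)   # eq. (8); equals alpha*omega1 at t = 1
        delta = int(e >= 1.0 / alpha_t)
        spent += alpha_t / (R + 1)                    # charge the budget with R_{t-1}
        R += delta
        # RAI update (9): invest more after an acceptance, hold back after a rejection.
        omega += omega1 * phi ** (t - R) * (1 - delta) - omega1 * psi ** R * delta
        decisions.append(delta)
        levels.append(alpha_t)
    return np.array(decisions), np.array(levels)

# Illustrative e-values: marginal Gaussian likelihood ratios for N(3,1) against N(0,1).
rng = np.random.default_rng(1)
signal = np.where(rng.uniform(size=200) < 0.3, 3.0, 0.0)
x = rng.normal(loc=signal)
e_values = np.exp(3.0 * x - 0.5 * 3.0 ** 2)
rejections, _ = e_lord(e_values)
print("number of rejections:", rejections.sum())
```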
Inspired by Storey-BH (Storey, 2002) and SAFFRON (Ramdas et al., + +2018), we further propose an adaptive estimate defined as + +$$ +\widehat {\mathrm {F D P}} _ {\mathrm {e}} ^ {\mathrm {S A F F R O N}} (t) := \sum_ {j = 1} ^ {t} \frac {\alpha_ {j}}{R _ {j - 1} + 1} \frac {\mathbb {1} \left\{e _ {j} < \frac {1}{\lambda_ {j}} \right\}}{1 - \lambda_ {j}}, +$$ + +where $\{\lambda_t\}_{t=1}^{\infty}$ is a predictable sequence of user-chosen parameters in the interval $(0,1)$ . Here the term adaptive means that it is based on an estimate of the proportion of true nulls as in Storey (2002); Ramdas et al. (2018). + +In contrast to $\widehat{\mathrm{FDP}}_{\mathrm{e}}^{\mathrm{LORD}}(t)$ , the summation in $\widehat{\mathrm{FDP}}_{\mathrm{e}}^{\mathrm{SAFFRON}}(t)$ includes only those test levels associated with relatively small e-values. Although $\widehat{\mathrm{FDP}}_{\mathrm{e}}^{\mathrm{SAFFRON}}(t)$ is not necessarily always larger than $\mathrm{FDP}_{\mathrm{e}}^{*}(t)$ , we can verify that $\mathbb{E}\left[\widehat{\mathrm{FDP}}_{\mathrm{e}}^{\mathrm{SAFFRON}}(t)\right] \geq \mathbb{E}\left[\mathrm{FDP}_{\mathrm{e}}^{*}(t)\right]$ , which is sufficient for FDR control according to Theorem 3.1. The properties of the adaptive estimate are formalized below. + +Proposition 3.4. Given a predictable sequence $\{\lambda_t\}_{t=1}^{\infty}$ , if online $e$ -values are valid in (2), then for $\alpha_t \in \mathcal{F}_{t-1}$ satisfying $\widehat{\mathrm{FDP}}_{\mathrm{e}}^{\mathrm{SAFFRON}}(t) \leq \alpha$ , we have: + +(a) $\mathbb{E}\left[\widehat{\mathrm{FDP}}_{\mathrm{e}}^{\mathrm{SAFFRON}}(t)\right] \geq \mathbb{E}\left[\mathrm{FDP}_{\mathrm{e}}^{*}(t)\right]$ and + +(b) $\mathrm{FDR}(t)\leq \alpha$ forall $t$ + +In the following, we consider $\lambda_t \equiv \lambda \in (0,1)$ for simplicity. Embracing the RAI principle, we propose an adaptive algorithm, called e-SAFFRON. The e-SAFFRON allocates testing levels as $\alpha_1 = \alpha(1 - \lambda)\omega_1$ and for $t \geq 2$ , + +$$ +\alpha_ {t} = \omega_ {t} \left(\alpha (1 - \lambda) - \sum_ {j = 1} ^ {t - 1} \frac {\alpha_ {j} \mathbb {1} \left\{e _ {j} < \frac {1}{\lambda} \right\}}{R _ {j - 1} + 1}\right) \left(R _ {t - 1} + 1\right), \tag {10} +$$ + +where $\omega_{t}\in (0,1)$ is updated by (9). + +We summarize e-SAFFRON in Algorithm 2. In particular, setting $\lambda = 0$ in Algorithm 2 simplifies it to Algorithm 1, demonstrating that e-SAFFRON serves as the adaptive counterpart to e-LORD, similar to the relationship between SAFFRON and LORD++ within the GAI framework. + +Remark 3.5. Note that the choice of $\lambda$ will affect the total "wealth" $\alpha(1 - \lambda)$ , which will be no further increased in the subsequent period of e-SAFFRON. We prefer a relatively small value $\lambda$ to preserve wealth, with $\lambda = 0.1$ as the default choice in our numerical experiments, which differs from the value recommended in the SAFFRON procedure (Ramdas et al., 2018). The latter, SAFFRON with independent p-values, allows for additional rewards when a hypothesis is rejected and suggests $\lambda = 0.5$ . 
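Algorithm 2, stated next, summarizes the procedure; as a companion, here is the same sketch as for e-LORD with the only changes e-SAFFRON requires, namely the reduced budget $\alpha(1-\lambda)$ and the indicator-weighted charge in (10). As before, this is a hedged illustration rather than a reference implementation.

```python
def e_saffron(e_values, alpha=0.05, lam=0.1, omega1=0.005, phi=0.5, psi=0.5):
    """Sketch of e-SAFFRON: as e-LORD, but with budget alpha*(1 - lam) and only
    the steps with e_j < 1/lam charged against it (eq. (10))."""
    decisions = []
    budget = alpha * (1.0 - lam)
    spent = 0.0   # running sum_{j<t} alpha_j * 1{e_j < 1/lam} / (R_{j-1} + 1)
    R = 0
    omega = omega1
    for t, e in enumerate(e_values, start=1):
        alpha_t = omega * (budget - spent) * (R + 1)   # eq. (10)
        delta = int(e >= 1.0 / alpha_t)
        spent += alpha_t * (e < 1.0 / lam) / (R + 1)
        R += delta
        omega += omega1 * phi ** (t - R) * (1 - delta) - omega1 * psi ** R * delta
        decisions.append(delta)
    return decisions
```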
+ +# Algorithm 2 e-SAFFRON + +1: Input: target FDR level $\alpha$ , initial allocation coefficient $\omega_{1} \in (0, 1)$ , parameters $\lambda, \varphi$ and $\psi \in (0, 1)$ , sequence of e-values $e_{1}, e_{2}, \ldots$ +2: Calculate $\alpha_{1} = \alpha (1 - \lambda)\omega_{1}$ and decide $\delta_1 = \mathbb{1}\left\{e_1\geq \frac{1}{\alpha_1}\right\}$ ; +3: Update $R_{1} = \delta_{1}$ and $\omega_{2}$ by (9); +4: for $t = 2,3,\ldots$ do +5: Update testing level $\alpha_{t}$ by (10); +6: Make decision $\delta_t = \mathbb{1}\left\{e_t \geq \frac{1}{\alpha_t}\right\}$ ; +7: Update $R_{t} = R_{t - 1} + \delta_{t}$ and $\omega_{t + 1}$ by (9); +8: end for +9: Output: decision set $\{\delta_1, \delta_2, \ldots\}$ . + +# 4. Further Discussions on e-GAI + +In this section, we further investigate the properties of the e-LORD and e-SAFFRON algorithms within the e-GAI framework. + +# 4.1. Long-Term Performance + +We provide strategies for the long-term performance of our methods to address the issue of $\alpha$ -death, halting rejections once $\alpha$ -wealth tends to zero, a common phenomenon within the GAI framework (Ramdas et al., 2017). + +$\alpha$ -death. In a long-term testing process, there may be extended periods during which no hypotheses are rejected, particularly when the true alternatives are rare, leading to a continuous accumulation of the allocation proportion $\omega_{t}$ . As a result, testing levels may become severely diminished in the later stages, making it difficult to achieve any further rejections. This phenomenon is referred to as $\alpha$ -death, which induces a loss of power and ultimately compromises the long-term efficacy of our online testing algorithm. + +mem-FDR control. To alleviate $\alpha$ -death over a long period (potentially infinite), Ramdas et al. (2017) defined decaying memory FDR (mem-FDR) to allow more attention to recent rejections by introducing a user-defined decay parameter $d \in (0, 1]$ and proposed mem-LORD++ that controls mem-FDR under independence. Specifically, mem-FDR is defined as + +$$ +\operatorname {m e m - F D R} (t) := \mathbb {E} \left[ \frac {\sum_ {j \in \mathcal {H} _ {0} (t)} d ^ {t - j} \delta_ {j}}{\sum_ {j = 1} ^ {t} d ^ {t - j} \delta_ {j}} \right]. +$$ + +To address the issue, we adapt e-GAI and design mem-e-GAI to control mem-FDR in our setting. The technique used here is similar to the e-GAI framework to control FDR in Section 3. Denote $R_{t}^{\mathrm{d}} = \sum_{j=1}^{t} d^{t-j} \delta_{j}$ for simplicity. + +Theorem 4.1. Suppose the online $e$ -values are valid in (2). Let the oracle $e$ -value-based estimate of mem-FDP be + +$$ +\operatorname {m e m - F D P} ^ {*} (t) := \sum_ {j \in \mathcal {H} _ {0} (t)} \frac {\alpha_ {j}}{d R _ {j - 1} ^ {\mathrm {d}} + 1}. \tag {11} +$$ + +If $\mathbb{E}$ [mem-FDP $^{*}(t)] \leq \alpha$ , then mem-FDR $(t) \leq \alpha$ for all $t$ . + +Theorem 4.1 provides an oracle estimate of mem-FDP, and offers insights and guidance for designing algorithms that control mem-FDR. Note that (11) is facilitated by an understanding of the unknown denominator $R_{t}^{\mathrm{d}}$ of the true mem-FDP. A natural choice that can serve as a $(j - 1)$ -measurable lower bound for predicting $R_{t}^{\mathrm{d}}$ is $d^{t - j}\left(dR_{j - 1}^{\mathrm{d}} + 1\right)$ since this predicted value $d^{t - j}\left(dR_{j - 1}^{\mathrm{d}} + 1\right) \leq \left(R_{t}^{\mathrm{d}} \vee 1\right)$ holds for each $j$ with $\delta_{j} = 1$ . + +mem-e-LORD & mem-e-SAFFRON. 
Adopting the core idea of the e-GAI framework, we can construct upper bounds for mem-FDP $^{*}$ (t) in (11) and design the testing levels accordingly to achieve mem-FDR control. + +One natural overestimate of mem-FDP\* $(t)$ is + +$$ +\widehat {\mathrm {m e m} - \mathrm {F D P}} ^ {\mathrm {L O R D}} (t) := \sum_ {j = 1} ^ {t} \frac {\alpha_ {j}}{d R _ {j - 1} ^ {\mathrm {d}} + 1} \tag {12} +$$ + +and any algorithm is referred to as mem-e-LORD that allocates testing levels $\{\alpha_{t}\}$ satisfying $\mathrm{mem - FDP}^{\mathrm{LORD}}(t)\leq$ $\alpha$ . As an example, adopting the RAI strategy, mem-e-LORD allocates testing levels as $\alpha_{1} = \alpha \omega_{1}$ and for $t\geq 2$ + +$$ +\alpha_ {t} = \omega_ {t} \left(\alpha - \sum_ {j = 1} ^ {t - 1} \frac {\alpha_ {j}}{d R _ {j - 1} ^ {\mathrm {d}} + 1}\right) \left(d R _ {t - 1} ^ {\mathrm {d}} + 1\right), \tag {13} +$$ + +where $\omega_{t}\in (0,1)$ is updated by (9). + +When the proportion of alternatives is non-negligible, it is preferable to employ an adaptive overestimate defined as + +$$ +\mathrm {m e m -} \widehat {\mathrm {F D P}} ^ {\mathrm {S A F F R O N}} (t) := \sum_ {j = 1} ^ {t} \frac {\alpha_ {j}}{d R _ {j - 1} ^ {\mathrm {d}} + 1} \frac {\mathbb {1} \left\{e _ {j} < \frac {1}{\lambda_ {j}} \right\}}{1 - \lambda_ {j}}, +$$ + +where $\{\lambda_t\}_{t=1}^{\infty}$ satisfying $\lambda_t \in (0,1)$ is a predictable sequence of user-chosen. We refer to an algorithm as mem-e-SAFRON that allocates testing levels $\{\alpha_t\}$ satisfying mem-FDP SAFFRON $(t) \leq \alpha$ . For simplicity, we consider $\lambda_t \equiv \lambda \in (0,1)$ and employ mem-e-SAFRON from RAI perspective, allocating testing levels as $\alpha_1 = \alpha(1 - \lambda)\omega_1$ and for $t \geq 2$ , + +$$ +\alpha_ {t} = \omega_ {t} \left(\alpha (1 - \lambda) - \sum_ {j = 1} ^ {t - 1} \frac {\alpha_ {j} \mathbb {1} \left\{e _ {j} < \frac {1}{\lambda} \right\}}{d R _ {j - 1} ^ {\mathrm {d}} + 1}\right) \left(d R _ {t - 1} ^ {\mathrm {d}} + 1\right), +$$ + +and updating $\omega_{t}\in (0,1)$ as in (9). + +According to Theorem 4.1, both mem-e-LORD and mem-e-SAFFRON achieve mem-FDR control. + +Proposition 4.2. Suppose online $e$ -values are valid in (2). + +(a) For $\alpha_{t} \in \mathcal{F}_{t-1}$ satisfying $\mathrm{mem - FDP}^{\mathrm{LORD}}(t) \leq \alpha$ , we have $\mathrm{mem - FDR}(t) \leq \alpha$ for all $t$ . +(b) Given a predictable sequence $\{\lambda_t\}_{t=1}^{\infty}$ , for $\alpha_t \in \mathcal{F}_{t-1}$ satisfying $\text{mem-FDP}^{\text{SAFFRON}}(t) \leq \alpha$ , we have $\mathbb{E}\left[\text{mem-FDP}^{\text{SAFFRON}}(t)\right] \geq \mathbb{E}\left[\text{mem-FDP}^*(t)\right]$ and $\text{mem-FDR}(t) \leq \alpha$ for all $t$ . + +# 4.2. Extension to p-values + +While the e-GAI framework is initially developed based on the study of e-values, we demonstrate that it can also be generalized to p-values satisfying conditionally superuniformity in (1), enriching the proposed framework and making the theory more comprehensive and complete. + +Suppose we observe valid $p$ -values $p_1, \ldots, p_t, \ldots$ and make decision $\delta_t = \mathbb{1}\{p_t \leq \alpha_t\}$ at each time $t$ . When using $p$ -values for online testing, we demonstrate that $\mathrm{FDP}_{\mathrm{e}}^{*}(t)$ in (6) can still serve as an oracle estimate of FDP, in which the denominator involves the number of rejections $R_t$ relevant to $p$ -values. 
By controlling this estimator to be bounded, we can achieve FDR control for $p$-values satisfying the conditionally super-uniform property in (1).

Theorem 4.3. Suppose the online $p$-values are conditionally super-uniform in (1). If $\mathrm{FDP}_{\mathrm{e}}^{*}(t)$ in (6) satisfies $\mathbb{E}\left[\mathrm{FDP}_{\mathrm{e}}^{*}(t)\right] \leq \alpha$, then $\mathrm{FDR}(t) \leq \alpha$ for all $t$.

Theorem 4.3 provides a general theoretical result for FDR control with conditionally super-uniform p-values. Building upon the analogous strategy outlined in Section 3 and Section 4.1, the e-GAI framework can be naturally extended to p-value-compatible algorithms and corresponding versions for the long-term performance. For precise algorithmic differentiation, we denote the p-value-adapted variants of e-LORD and e-SAFFRON as pL-RAI and pS-RAI, respectively, emphasizing the adoption of p-values as the test statistics and the dynamic updating of the testing levels $\alpha_{t}$ through the RAI strategy. We provide more detailed discussions and technical explanations in Appendix B.4.

# 5. Numerical Experiments

In this section, we evaluate the performance of our online testing framework on both synthetic and real data. We compare e-LORD, e-SAFFRON, pL-RAI, and pS-RAI with e-LOND, LORD++, SAFFRON, and SupLORD in terms of FDR and power. We validate the performance of mem-e-LORD and mem-e-SAFFRON through simulated numerical experiments in Appendix C.2. The code for all numerical experiments in this paper is available at https://github.com/zijianwei01/e-GAI.

![](images/7d8281c511e47ff634f2961c379d749bc60b90dd28ced5acd7b570a292ea98cc.jpg)
![](images/26d62770dded6e5a49269249703737ca650c00e4fe9af635ed1eec2e0f9a4e47.jpg)
![](images/3caa23faf7e61edf60c2b9462e73062c560377f17b0937046ff359fc7f112b2d.jpg)
Figure 1. Empirical FDR and power with standard error versus the proportion of alternative hypotheses $\pi_1$ for various online methods, with $\rho = 0.5$, $L = 30$ and $\mu_{\mathrm{c}} = 3$. (a) e-value-based methods: e-LORD, e-SAFFRON, e-LOND. (b) p-value-based methods: pL-RAI, pS-RAI, LORD++, SAFFRON, SupLORD.

# 5.1. Simulation: Testing with Gaussian Observations

We use an experimental setup that tests the mean of a Gaussian distribution with $T = 500$ hypotheses in total. The null hypothesis takes $\mathbb{H}_t : \mu_t = 0$ for each time $t \in [T]$. The true labels $\theta_t$ are generated from Bernoulli$(\pi_1)$. The Gaussian variates $(X_1, \ldots, X_T)^\top$ are drawn from $\mathcal{N}(\boldsymbol{\mu}, \boldsymbol{\Sigma})$ with mean vector $\boldsymbol{\mu} \in \mathbb{R}^T$ and covariance matrix $\boldsymbol{\Sigma} \in \mathbb{R}^{T \times T}$. The elements in $\boldsymbol{\mu} = (\mu_1, \ldots, \mu_T)^\top$ satisfy $\mu_t = 0$ if $\theta_t = 0$ and $\mu_t = \mu_c > 0$ if $\theta_t = 1$, where $\mu_c$ is the signal parameter. Additionally, the signals of the true alternatives are correlated with correlation coefficient $\rho$. The covariance matrix satisfies $\Sigma \succ 0$ and $\Sigma_{ij} = \rho^{|i - j|} \cdot \mathbb{1}\{|i - j| \leq L\}$. Note that the data at different time points influence each other, which is in line with real-life online scenarios.

Under the normality assumption at each time point $t$, we compute the e-value as the corresponding likelihood-ratio statistic and the p-value by evaluating the conditional distribution.
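The data-generating process just described can be reproduced, up to implementation details, with a few lines of NumPy. The sketch below uses the marginal Gaussian likelihood ratio as the e-value, whereas the experiments evaluate the likelihood ratio and p-value conditionally on the past, which is not reproduced here; the defaults $\pi_1 = 0.1$ and the seed are placeholders.

```python
import numpy as np

def simulate_stream(T=500, pi1=0.1, mu_c=3.0, rho=0.5, L=30, seed=0):
    """Toy version of the Section 5.1 design: banded covariance
    Sigma_ij = rho^|i-j| * 1{|i-j| <= L}, mean mu_c at the true alternatives."""
    rng = np.random.default_rng(seed)
    theta = rng.binomial(1, pi1, size=T)                     # true labels theta_t
    lag = np.abs(np.subtract.outer(np.arange(T), np.arange(T)))
    sigma = np.where(lag <= L, rho ** lag, 0.0)
    x = rng.multivariate_normal(mean=mu_c * theta, cov=sigma)
    return x, theta

def marginal_lr_evalues(x, mu_c=3.0):
    """Marginal likelihood-ratio e-values for N(mu_c, 1) against N(0, 1)."""
    return np.exp(mu_c * x - 0.5 * mu_c ** 2)

x, theta = simulate_stream()
e_values = marginal_lr_evalues(x)
```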
We take $\omega_{1} = 0.005$, $\varphi = \psi = 0.5$ in e-LORD and pL-RAI, and additionally $\lambda = 0.1$ in e-SAFFRON and pS-RAI, while we use the default parameters from the R package onlineFDR (Robertson et al., 2022) for the other benchmarks. The target FDR level is set as $\alpha = 0.05$.

Table 2. Proportion of points rejected out of anomalous regions. e-LORD, e-SAFFRON and e-LOND control the estimated FDP under $\alpha = 0.1$. e-SAFFRON loses some power due to a tiny proportion of true alternatives.

| Method | e-LORD | e-SAFFRON | e-LOND |
| --- | --- | --- | --- |
| FDP | 0.085 | 0.087 | 0.061 |
| Num Discovery | 47 | 46 | 33 |
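For reference, the two quantities reported throughout this section can be computed from a single run as follows (a small sketch; the figures average these over repeated simulations, which is omitted here):

```python
import numpy as np

def fdp_and_power(decisions, theta):
    """Empirical FDP and power of one run: FDP = false rejections / total rejections,
    power = rejected true alternatives / number of true alternatives."""
    decisions = np.asarray(decisions, dtype=bool)
    theta = np.asarray(theta, dtype=bool)
    n_rej = decisions.sum()
    fdp = (decisions & ~theta).sum() / max(n_rej, 1)
    power = (decisions & theta).sum() / max(theta.sum(), 1)
    return fdp, power
```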
Figure 1 shows the results of various methods with $\rho = 0.5$ and $\mu_{\mathrm{c}} = 3$. The empirical results show that SAFFRON inflates the FDR heavily, while the other methods maintain FDR control. However, there is no theoretical guarantee for controlling the FDR with complex dependent data in LORD++, which makes its rejection decisions not as safe as they seem. Owing to dynamically updating the testing levels, both e-LORD and e-SAFFRON lead to much higher power than e-LOND, and pL-RAI and pS-RAI lead to higher power than LORD++ and SupLORD. Similar performance can be found for other settings, as presented in Appendix C.1.

# 5.2. Real Data: NYC Taxi Anomaly Detection

We analyze the NYC taxi dataset from the Numenta Anomaly Benchmark (NAB) repository (Lavin & Ahmad, 2015). The dataset captures the number of NYC taxi passengers every 30 minutes from July 1, 2014, to January 31, 2015. Five known anomalous intervals correspond to notable events such as the NYC marathon, Thanksgiving, Christmas, New Year's Day, and a snowstorm. We visualize the data with the known anomalous intervals highlighted using red rectangles in Figure 2. The anomaly detection problem is formulated as an online multiple testing problem.

We employ the R package `stlplus` to perform STL decomposition (Cleveland et al., 1990) to remove the seasonal and trend components. We derive tests on the residuals, which are assumed to form an independent sequence. The first 2000 time points are taken as the initial sequence for model calibration. We focus on the comparisons among e-value-based methods and apply e-LORD, e-SAFFRON and e-LOND to analyze this dataset. We use the estimated likelihood ratio as e-values (shown in (21)). We choose $\omega_{1} = 0.0001$ and $\lambda = 0.1$ and set both $\psi$ and $\varphi$ as 0.5.

We compare their performance in terms of the proportion of discoveries out of marked anomalous regions, denoted here as $\widehat{\mathrm{FDP}}$, and the number of discovered anomalous regions in Table 2. Our e-GAI procedures effectively maintain $\widehat{\mathrm{FDP}}$ below the target level. As illustrated in Figure 2, e-LORD and e-SAFFRON demonstrate higher power than e-LOND, as both identify more points within the anomalous regions, shown in red squares. More comparisons for p-value-based methods are shown in Appendix C.3.

# 5.3. Real Data: Dating Financial Bubbles

We follow Genoni et al. (2023), building a sequential test on stock market prices for financial bubbles. Online testing procedures enable decision-making regarding bubble occurrence based on the current observation before the subsequent one is observed. Controlling the FDR provides a guarantee that decisions are made with a controllable proportion of mistakes.

The analysis is performed on the monthly stock price of the Nasdaq series. The calibration is implemented using the first $1/3$ of the observations, assuming the related period to be free of bubbles. Genoni et al. (2023) uses the standard p-values of the ADF test (R package urca), which are calculated point-wise and thus do not satisfy the conditionally super-uniform property. Moreover, such a valid p-value is difficult to construct for time series. Building on Dickey & Fuller (1981), we reformulate the likelihood ratio for the unit root test as sequential e-values and further normalize them by the estimated conditional expectation under the null to guarantee conditional validity. The task is to identify the bubble beginning date (BBD) and bubble ending date (BED).
+ +As shown in Figure 3, with a significance level of $\alpha = 0.005$ , e-LOND only detects the initial potential change associated with the gradual emergence of technology companies in the marketplace. The e-LORD and e-SAFFRON exhibit similar behavior, providing a comprehensive characterization of the entire potential bubble influence region by marking a dense rejection region. The date 1990-01-20, corresponding to the onset of the bubble, has been identified, aligning with the classical view. The BED in Figure 3 is determined based on empirical experience due to the long-term effects of the dot-com bubble burst. Specifically, while the burst of the dot-com bubble led to a reset in valuations, NASDAQ's volatility remained elevated, exceeding pre-bubble levels and establishing the groundwork for subsequent financial cycles. As a result, this sustained volatility led to persistent rejections in online procedures even after BBD. The region detected by e-LORD also includes other historical events after BED. + +# 6. Summary + +In this paper, we propose a novel framework named e-GAI that introduces an oracle estimate of online FDP and ensures online FDR control under conditional validity with theoretical guarantees. The e-GAI dynamically allocates testing levels from the RAI perspective, which relies on both the number of previous rejections and prior costs and assigns less $\alpha$ -wealth for each rejection. Within the e-GAI framework, we propose two new algorithms, e-LORD and e-SAFFRON, respectively. Both e-LORD and e-SAFFRON are more powerful than e-LOND under complicated dependence. We also + +![](images/43ae797f7b5dbd5c884e110d0d39e8089b78a2533d8408cf24ed973c0d68393b.jpg) + +![](images/d5308493e5138978e026025af51028cd96eec6f9f571329f4d7bc67b43f7e9cd.jpg) + +![](images/e2690f624461f3adf03d923b026f943cdef506cce4b954b5b0acdb8d3c182435.jpg) +Figure 2. Anomaly points detected by e-LORD (above), e-SAFFRON (middle) and e-LOND (below). Rejection points of all procedures are marked by dark blue points. Red regions refer to known anomalies. The testing level is chosen as 0.1. Red squares indicate additional discovery of e-LORD and e-SAFFRON compared to e-LOND. + +![](images/65139eae00cb002a0e0afa9c250c70af5972a424c20385c349af04dbe80de9c6.jpg) +Figure 3. Dating of BBD and BED in the Nasdaq series. Blue and red points refer to e-LORD and e-LOND rejection points under $\alpha = 0.005$ , respectively. Blue and red regions refer to bubble influence regions detected by e-LORD and e-LOND. Blue and red dashed lines refer to BBD and BED marked by the Nasdaq series. Significant historical events are marked. + +propose mem-e-LORD and mem-e-SAFFRON correspondingly for the long-term performance to alleviate $\alpha$ -death. Moreover, we demonstrate that e-GAI can be generalized to conditionally super-uniform p-values, making e-GAI a more versatile tool with reliable theoretical guarantees and increased practical value. + +We conclude this work with two remarks. Firstly, although + +e-SAFFRON provides an adaptive online FDR procedure, how to more accurately approximate the proportions of nulls like ADDIS (Tian & Ramdas, 2019) deserves further research. Secondly, as discussed in Section 3.1, a more accurate approximation of $\mathrm{FDP}_{\mathrm{e}}^{*}(t)$ can be expected when knowing a more specific dependence structure. 
It warrants future study on the relationship between the denominator of $\mathrm{FDP}_{\mathrm{e}}^{*}(t)$ and the dependence structure, and how to design a more efficient strategy to assign testing levels under such cases. + +# Acknowledgements + +We sincerely thank the anonymous reviewers for their insightful comments and constructive suggestions, which have greatly improved the quality of this manuscript. Haojie Ren was supported by the National Key R&D Program of China (Grant No. 2024YFA1012200), the National Natural Science Foundation of China (Grant No. 12471262), the Young Elite Scientists Sponsorship Program by CAST and Shanghai Jiao Tong University 2030 Initiative. Changliang Zou was supported by the National Key R&D Program of China (Grant Nos. 2022YFA1003703, 2022YFA1003800) and the National Natural Science Foundation of China (Grant No. 12231011). + +# Impact Statement + +This paper presents work whose goal is to advance the field of Machine Learning. There are many potential societal consequences of our work, none which we feel must be specifically highlighted here. + +# References + +Aharoni, E. and Rosset, S. Generalized $\alpha$ -investing: definitions, optimality results and application to public databases. Journal of the Royal Statistical Society Series B: Statistical Methodology, 76(4):771-794, 2014. +Ahmad, S., Lavin, A., Purdy, S., and Agha, Z. Unsupervised real-time anomaly detection for streaming data. Neurocomputing, 262:134-147, 2017. +Benjamini, Y. and Hochberg, Y. Controlling the false discovery rate: a practical and powerful approach to multiple testing. Journal of the Royal statistical society: series B (Methodological), 57(1):289-300, 1995. +Benjamini, Y. and Yekutieli, D. The control of the false discovery rate in multiple testing under dependency. The Annals of Statistics, 29(4):1165-1188, 2001. +Cleveland, R. B., Cleveland, W. S., McRae, J. E., Terpenning, I., et al. Stl: A seasonal-trend decomposition. J. off. Stat, 6(1):3-73, 1990. +Dickey, D. A. and Fuller, W. A. Likelihood ratio statistics for autoregressive time series with a unit root. *Econometrica*, 49(4):1057-1072, 1981. +Fisher, A. Online false discovery rate control for lord++ and saffron under positive, local dependence. Biometrical Journal, 66(1):2300177, 2024. +Foster, D. P. and Stine, R. A. $\alpha$ -investing: a procedure for sequential control of expected false discoveries. Journal of the Royal Statistical Society Series B: Statistical Methodology, 70(2):429-444, 2008. +Genoni, G., Quatto, P., and Vacca, G. Dating financial bubbles via online multiple testing procedures. *Finance Research Letters*, 58:104238, 2023. +Javanmard, A. and Montanari, A. Online rules for control of false discovery rate and false discovery exceedance. The Annals of statistics, 46(2):526-554, 2018. +Lavin, A. and Ahmad, S. Evaluating real-time anomaly detection algorithms-the numenta anomaly benchmark. In 2015 IEEE 14th international conference on machine learning and applications (ICMLA), pp. 38-44. IEEE, 2015. + +Li, G. and Zhang, X. A note on e-values and multiple testing. Biometrika, 112(1):asae050, 2025. +Ramdas, A., Yang, F., Wainwright, M. J., and Jordan, M. I. Online control of the false discovery rate with decaying memory. In Proceedings of the 31st International Conference on Neural Information Processing Systems, pp. 5655-5664, 2017. +Ramdas, A., Zrnic, T., Wainwright, M., and Jordan, M. Saffron: an adaptive algorithm for online control of the false discovery rate. 
In International Conference on Machine Learning, pp. 4286-4294. PMLR, 2018. +Ren, Z. and Barber, R. F. Derandomised knockoffs: leveraging e-values for false discovery rate control. Journal of the Royal Statistical Society Series B: Statistical Methodology, 86(1):122-154, 2024. +Robertson, D. S., Liou, L., Ramdas, A., and Karp, N. A. onlineFDR: Online error control, 2022. R package 2.5.1. +Shafer, G. Testing by betting: A strategy for statistical and scientific communication. Journal of the Royal Statistical Society Series A: Statistics in Society, 184(2):407-431, 2021. +Shafer, G., Shen, A., Vereshchagin, N., and Vovk, V. Test martingales, bayes factors and p-values. Statistical Science, 26(1):84-101, 2011. +Storey, J. D. A direct approach to false discovery rates. Journal of the Royal Statistical Society Series B: Statistical Methodology, 64(3):479-498, 2002. +Tian, J. and Ramdas, A. Addis: an adaptive discarding algorithm for online fdr control with conservative nulls. In Proceedings of the 33rd International Conference on Neural Information Processing Systems, pp. 9388-9396, 2019. +Vovk, V. and Wang, R. E-values: Calibration, combination and applications. The Annals of Statistics, 49(3):1736-1754, 2021. +Wang, R. and Ramdas, A. False discovery rate control with e-values. Journal of the Royal Statistical Society Series B: Statistical Methodology, 84(3):822-852, 2022. +Xu, Z. and Ramdas, A. Dynamic algorithms for online multiple testing. In Mathematical and Scientific Machine Learning, pp. 955-986. PMLR, 2022. +Xu, Z. and Ramdas, A. Online multiple testing with e-values. In International Conference on Artificial Intelligence and Statistics, pp. 3997-4005. PMLR, 2024. + +# A. Proofs + +Here, we include all the proofs of the results throughout the paper. + +# A.1. Proof of Theorem 3.1 + +Proof. Suppose a desired level $\alpha$ is given. For all time $t$ , we have + +$$ +\begin{array}{l} \operatorname {F D R} (t) = \mathbb {E} \left[ \frac {\sum_ {j \in \mathcal {H} _ {0} (t)} \delta_ {j}}{R _ {t} \vee 1} \right] \stackrel {(i)} {\leq} \mathbb {E} \left[ \sum_ {j \in \mathcal {H} _ {0} (t)} \frac {\delta_ {j}}{R _ {j - 1} + 1} \right] = \mathbb {E} \left[ \sum_ {j \in \mathcal {H} _ {0} (t)} \frac {\mathbb {1} \{e _ {j} \geq \frac {1}{\alpha_ {j}} \}}{R _ {j - 1} + 1} \right] \stackrel {(i i)} {\leq} \mathbb {E} \left[ \sum_ {j \in \mathcal {H} _ {0} (t)} \frac {e _ {j} \alpha_ {j}}{R _ {j - 1} + 1} \right] \\ = \mathbb {E} \left[ \sum_ {j \in \mathcal {H} _ {0} (t)} \frac {\mathbb {E} \left[ e _ {j} \mid \mathcal {F} _ {j - 1} \right] \alpha_ {j}}{R _ {j - 1} + 1} \right] \stackrel {(i i i)} {\leq} \mathbb {E} \left[ \sum_ {j \in \mathcal {H} _ {0} (t)} \frac {\alpha_ {j}}{R _ {j - 1} + 1} \right] = \mathbb {E} \left[ \mathrm {F D P} _ {\mathrm {e}} ^ {*} (t) \right] \stackrel {(i v)} {\leq} \alpha , \\ \end{array} +$$ + +where the inequality (i) holds since $R_{j-1} + 1 \leq (R_t \vee 1)$ for every $j \in \{j \leq t : \delta_j = 1\}$ by definition, the inequality (ii) holds since $\mathbb{1}\{y \geq 1\} \leq y$ for any $y > 0$ , the inequality (iii) follows after taking iterated expectations by conditioning on $\mathcal{F}_{j-1}$ and then applying the property of e-values, and inequality (iv) holds by condition. Thus, we finish the whole proof. + +# A.2. Proof of Proposition 3.2 + +Proof. 
To prove the property, we only need to verify that $\mathbb{E}\left[\widehat{\mathrm{FDP}}_{\mathrm{e}}^{\mathrm{LORD}}(t)\right] \geq \mathbb{E}\left[\mathrm{FDP}_{\mathrm{e}}^{*}(t)\right]$ for all $t$ , then we can obtain the desired result by Theorem 3.1. This holds trivially since $\widehat{\mathrm{FDP}}_{\mathrm{e}}^{\mathrm{LORD}}(t) \geq \mathrm{FDP}_{\mathrm{e}}^{*}(t)$ by construction. + +# A.3. Proof of Proposition 3.4 + +Proof. Given a desired level $\alpha$ and predictable sequence $\{\lambda_t\}_{t=1}^{\infty}$ , for all $t$ , + +$$ +\begin{array}{l} \mathbb {E} \left[ \widehat {\mathrm {F D P}} _ {\mathrm {e}} ^ {\mathrm {S A F F R O N}} (t) \right] = \sum_ {j = 1} ^ {t} \mathbb {E} \left[ \frac {\alpha_ {j}}{R _ {j - 1} + 1} \frac {\mathbb {1} \left\{e _ {j} < \frac {1}{\lambda_ {j}} \right\}}{1 - \lambda_ {j}} \right] \\ \geq \sum_ {j \in \mathcal {H} _ {0} (t)} \mathbb {E} \left[ \frac {\alpha_ {j}}{R _ {j - 1} + 1} \frac {\mathbb {1} \left\{e _ {j} < \frac {1}{\lambda_ {j}} \right\}}{1 - \lambda_ {j}} \right] \\ = \sum_ {j \in \mathcal {H} _ {0} (t)} \mathbb {E} \left[ \frac {\alpha_ {j}}{R _ {j - 1} + 1} \frac {\mathbb {E} \left[ \mathbb {1} \left\{e _ {j} < \frac {1}{\lambda_ {j}} \right\} \mid \mathcal {F} _ {j - 1} \right]}{1 - \lambda_ {j}} \right] \\ \stackrel {(i)} {\geq} \sum_ {j \in \mathcal {H} _ {0} (t)} \mathbb {E} \left[ \frac {\alpha_ {j}}{R _ {j - 1} + 1} \right] = \mathbb {E} \left[ \mathrm {F D P} _ {\mathrm {e}} ^ {*} (t) \right], \\ \end{array} +$$ + +where inequality (i) holds because $\mathbb{E}\left[\mathbb{1}\left\{e_j < \frac{1}{\lambda_j}\right\} \mid \mathcal{F}_{j-1}\right] \geq 1 - \lambda_j$ by the property of e-values. This concludes the proof of part (a), and then (b) can be derived by Theorem 3.1, which completes the proof. + +# A.4. Proof of Theorem 4.1 + +Proof. Suppose a desired level $\alpha$ is given. 
For all time $t$ , we have + +$$ +\begin{array}{l} \operatorname {m e m - F D R} (t) = \mathbb {E} \left[ \frac {\sum_ {j \in \mathcal {H} _ {0} (t)} d ^ {t - j} \delta_ {j}}{R _ {t} ^ {\mathrm {d}} \vee 1} \right] \stackrel {(i)} {\leq} \mathbb {E} \left[ \sum_ {j \in \mathcal {H} _ {0} (t)} \frac {d ^ {t - j} \delta_ {j}}{d ^ {t - j} (d R _ {j - 1} ^ {\mathrm {d}} + 1)} \right] = \mathbb {E} \left[ \sum_ {j \in \mathcal {H} _ {0} (t)} \frac {\mathbb {1} \{e _ {j} \geq \frac {1}{\alpha_ {j}} \}}{d R _ {j - 1} ^ {\mathrm {d}} + 1} \right] \\ \stackrel {(i i)} {\leq} \mathbb {E} \left[ \sum_ {j \in \mathcal {H} _ {0} (t)} \frac {e _ {j} \alpha_ {j}}{d R _ {j - 1} ^ {\mathrm {d}} + 1} \right] = \mathbb {E} \left[ \sum_ {j \in \mathcal {H} _ {0} (t)} \frac {\mathbb {E} [ e _ {j} | \mathcal {F} _ {j - 1} ] \alpha_ {j}}{d R _ {j - 1} ^ {\mathrm {d}} + 1} \right] \stackrel {(i i i)} {\leq} \mathbb {E} \left[ \sum_ {j \in \mathcal {H} _ {0} (t)} \frac {\alpha_ {j}}{d R _ {j - 1} ^ {\mathrm {d}} + 1} \right] \\ = \mathbb {E} \left[ \operatorname {m e m - F D P} ^ {*} (t) \right] \stackrel {(i v)} {\leq} \alpha , \\ \end{array} +$$ + +where the inequality (i) holds since $R_{t}^{\mathrm{d}} = \sum_{k=1}^{j-1} d^{t-k} \delta_{k} + d^{t-j} \delta_{j} + \sum_{k=j+1}^{t} d^{t-k} \delta_{k} \geq d^{t-j+1} R_{j-1}^{\mathrm{d}} + d^{t-j} \delta_{j}$ and hence $d^{t-j}(dR_{j-1} + 1) \leq (R_{t}^{\mathrm{d}} \vee 1)$ for every $j \in \{j \leq t : \delta_{j} = 1\}$ by definition, the inequality (ii) holds since $\mathbb{1}\{y \geq 1\} \leq y$ for any $y > 0$ , the inequality (iii) follows after taking iterated expectations by conditioning on $\mathcal{F}_{j-1}$ and then applying the property of e-values, and inequality (iv) holds by condition. Thus, we finish the whole proof. + +# A.5. Proof of Proposition 4.2 + +Proof. (a) To prove the property, we only need to verify that $\mathbb{E}\left[\widehat{\mathrm{mem - FDP}}^{\mathrm{LORD}}(t)\right] \geq \mathbb{E}\left[\mathrm{mem - FDP}^{*}(t)\right]$ for all $t$ , then we can obtain the desired result by Theorem 4.1. This holds trivially since $\widehat{\mathrm{FDP}}^{\mathrm{LORD}}(t) \geq \mathrm{mem - FDP}^{*}(t)$ by construction. + +(b) Given a desired level $\alpha$ and predictable sequence $\{\lambda_t\}_{t=1}^{\infty}$ , for all $t$ , + +$$ +\begin{array}{l} \mathbb {E} \left[ \operatorname {m e m -} \widehat {\mathrm {F D P}} ^ {\mathrm {S A F F R O N}} (t) \right] = \sum_ {j = 1} ^ {t} \mathbb {E} \left[ \frac {\alpha_ {j}}{d R _ {j - 1} ^ {\mathrm {d}} + 1} \frac {\mathbb {1} \left\{e _ {j} < \frac {1}{\lambda_ {j}} \right\}}{1 - \lambda_ {j}} \right] \\ \geq \sum_ {j \in \mathcal {H} _ {0} (t)} \mathbb {E} \left[ \frac {\alpha_ {j}}{d R _ {j - 1} ^ {\mathrm {d}} + 1} \frac {\mathbb {1} \left\{e _ {j} < \frac {1}{\lambda_ {j}} \right\}}{1 - \lambda_ {j}} \right] \\ = \sum_ {j \in \mathcal {H} _ {0} (t)} \mathbb {E} \left[ \frac {\alpha_ {j}}{d R _ {j - 1} ^ {\mathrm {d}} + 1} \frac {\mathbb {E} \left[ \mathbb {1} \left\{e _ {j} < \frac {1}{\lambda_ {j}} \right\} \mid \mathcal {F} _ {j - 1} \right]}{1 - \lambda_ {j}} \right] \\ \stackrel {(i)} {\geq} \sum_ {j \in \mathcal {H} _ {0} (t)} \mathbb {E} \left[ \frac {\alpha_ {j}}{d R _ {j - 1} ^ {\mathrm {d}} + 1} \right] = \mathbb {E} [ \text {m e m - F D P} ^ {*} (t) ], \\ \end{array} +$$ + +where inequality (i) holds because $\mathbb{E}\left[\mathbb{1}\left\{e_j < \frac{1}{\lambda_j}\right\} \mid \mathcal{F}_{j-1}\right] \geq 1 - \lambda_j$ by the property of e-values. 
By Theorem 4.1, we have $\mathrm{mem\text{-}FDR}(t) \leq \alpha$ for all $t$.

# A.6. Proof of Theorem 4.3

Proof. Recall that the decision $\delta_t = \mathbb{1}\{p_t \leq \alpha_t\}$ for each time $t$. Suppose a desired level $\alpha$ is given. For all time $t$, we have

$$
\begin{array}{l}
\mathrm{FDR}(t) = \mathbb{E}\left[\frac{\sum_{j\in\mathcal{H}_0(t)}\delta_j}{R_t\vee 1}\right] \stackrel{(i)}{\leq} \mathbb{E}\left[\sum_{j\in\mathcal{H}_0(t)}\frac{\delta_j}{R_{j-1}+1}\right] = \mathbb{E}\left[\sum_{j\in\mathcal{H}_0(t)}\frac{\mathbb{1}\{p_j\leq\alpha_j\}}{R_{j-1}+1}\right] \\
= \mathbb{E}\left[\sum_{j\in\mathcal{H}_0(t)}\frac{\mathbb{E}\left[\mathbb{1}\left\{p_j\leq\alpha_j\right\}\mid\mathcal{F}_{j-1}\right]}{R_{j-1}+1}\right] \stackrel{(ii)}{\leq} \mathbb{E}\left[\sum_{j\in\mathcal{H}_0(t)}\frac{\alpha_j}{R_{j-1}+1}\right] = \mathbb{E}\left[\mathrm{FDP}_{\mathrm{e}}^{*}(t)\right] \stackrel{(iii)}{\leq} \alpha,
\end{array}
$$

where inequality (i) holds since $R_{j-1} + 1 \leq (R_t \vee 1)$ for every $j \in \{j \leq t : \delta_j = 1\}$ by definition; inequality (ii) follows after taking iterated expectations by conditioning on $\mathcal{F}_{j-1}$ and then applying the conditionally super-uniform property of p-values; and inequality (iii) holds by condition. Thus, we finish the whole proof.

# B. Deferred Discussions

# B.1. Definition of PRDS

Fisher (2024) introduced an online version of the well-known positive regression dependence on a subset (PRDS) condition proposed by Benjamini & Yekutieli (2001), analyzed the performance of LORD++ and SAFFRON under it, and proved online FDR control when the testing statistics, namely the p-values, are conditional PRDS.

Before formally defining PRDS, it is necessary to first introduce the concept of increasing sets. A set $I \subseteq \mathbb{R}^{K}$ is called increasing if $\mathbf{x} \in I$ implies $\mathbf{y} \in I$ for all $\mathbf{y} \geq \mathbf{x}$. Here $\mathbf{y} \geq \mathbf{x}$ means that each component of $\mathbf{y}$ is no smaller than the corresponding component of $\mathbf{x}$.

Definition B.1. (Conditional PRDS between p-values; Fisher, 2024) The p-values are conditional PRDS if for each time $t$, any $j \leq t$ satisfying $j \in \mathcal{H}_0(t)$, and increasing set $I \subset \mathbb{R}^t$, the probability $\mathbb{P}\left((p_1, \ldots, p_t) \in I \mid p_j = u, \mathcal{F}_{j-1}\right)$ is non-decreasing in $u$.

Wang & Ramdas (2022) discussed the PRDS condition for e-values in the offline setting; we generalize it to the online scenario. A set $D \subseteq \mathbb{R}^{K}$ is called decreasing if $\mathbf{x} \in D$ implies $\mathbf{y} \in D$ for all $\mathbf{y} \leq \mathbf{x}$.

Definition B.2. (Conditional PRDS between e-values) The e-values are conditional PRDS if for each time $t$, any $j \leq t$ satisfying $j \in \mathcal{H}_0(t)$, and decreasing set $D \subset \mathbb{R}^t$, the probability $\mathbb{P}\left((e_1, \ldots, e_t) \in D \mid e_j = u, \mathcal{F}_{j-1}\right)$ is non-increasing in $u$.

# B.2. Choices for $\omega_{1}$

In this section, we further elaborate on the motivation and advantage of updating $\omega_{t}$ in (9) from the RAI perspective.
Building on this, we provide a theoretical justification for the recommended choice of $\omega_{1} = O(1 / T)$ and present supporting experimental results. + +In the e-LORD and e-SAFFRON algorithms, $\omega_{t}$ controls the proportion of the remaining $\alpha$ -wealth allocated to the current testing. A larger $\omega_{t}$ indicates that more $\alpha$ -wealth is currently invested, which also implies a greater possibility of rejecting the current hypothesis, and meanwhile, it will be more possible to exhaust the entire wealth. Therefore, we prioritize updating $\omega_{t}$ from the RAI perspective, dynamically allocating the testing levels and enabling data-driven updates to achieve higher power. In contrast, the testing levels $\alpha_{t}$ in e-LOND are derived from a pre-specified decay sequence that sums to 1 (Xu & Ramdas, 2024). + +A simplified version of (9) is to set $\varphi = \psi = 0$ and thus $\omega_{t} = \omega_{1}$ for all $t$ . In this case, a natural and reasonable way to choose $\omega_{1}$ is to assign equal weight at each time point, i.e., $\omega_{1} = 1 / T$ , motivating the choice of initial value for dynamic updates. + +Empirical results support this analysis. The power results for different choices of $\omega_{1}$ across varying $T$ under an AR(1) model, introduced in Appendix C.2, are shown in Table 3. From Table 3, our algorithms with $\omega_{1} = 1 / T$ achieve the highest power and have the latest time of the last rejection and the largest tail testing level, supporting potential subsequent long-term testing. Moreover, it can be seen that the updates in e-LORD and e-SAFFRON are data-driven: as $T$ varies, the remaining wealth for these algorithms does not change significantly and shows robustness. In contrast, e-LOND uses a pre-specified allocation ratio, and as $T$ increases, the $\alpha$ -wealth at time $T$ of e-LOND diminishes progressively. + +# B.3. Recursive Update Forms of e-LORD & e-SAFFRON + +In this section, we provide recursive update forms of e-LORD and e-SAFFRON, respectively. The computation is highly efficient and memory-friendly since the update of both $\omega_{t}$ (expressed in a recursive form in (9)) and $\alpha_{t}$ can be expressed in a recursive form as follows. + +Table 3. Average results under an AR(1) model over 100 repetitions with $\mu_{\mathrm{c}} = 4$ , $\pi_1 = 0.4$ , and $\alpha = 0.05$ . + +
| $T$ | Method | $\omega_1$ | Power (%) | Time of the last rejection | $\alpha_T/\alpha$ ($\times 10^{-4}$) |
| --- | --- | --- | --- | --- | --- |
| 500 | e-LORD | $1/T$ | 70.0 | 498 | 1031.0 |
| | | $1/\sqrt{T}$ | 22.2 | 275 | 0.0 |
| | | $1/T^2$ | 8.6 | 483 | 0.7 |
| | e-SAFFRON | $1/T$ | 70.5 | 498 | 1368.1 |
| | | $1/\sqrt{T}$ | 38.7 | 441 | 0.0 |
| | | $1/T^2$ | 8.0 | 483 | 0.6 |
| | e-LOND | -- | 30.9 | 491 | 2.5 |
| 1000 | e-LORD | $1/T$ | 70.1 | 998 | 1029.0 |
| | | $1/\sqrt{T}$ | 16.2 | 406 | 0.0 |
| | | $1/T^2$ | 4.5 | 962 | 0.2 |
| | e-SAFFRON | $1/T$ | 70.9 | 998 | 1366.7 |
| | | $1/\sqrt{T}$ | 28.0 | 706 | 0.0 |
| | | $1/T^2$ | 4.2 | 958 | 0.2 |
| | e-LOND | -- | 23.9 | 983 | 1.0 |
+ +To compute $\alpha_{t}$ of e-LORD in Algorithm 1, we define the remaining wealth as + +$$ +\mathrm {r w} _ {t} ^ {\mathrm {e - L O R D}} = \alpha - \sum_ {j = 1} ^ {t - 1} \frac {\alpha_ {j}}{R _ {j - 1} + 1} +$$ + +and update + +$$ +\alpha_ {t} = \omega_ {t} \operatorname {r w} _ {t} ^ {\mathrm {e - L O R D}} (R _ {t - 1} + 1). +$$ + +A similar recursive form of $\alpha_{t}$ of e-SAFFRON in Algorithm 2 can be obtained by defining the remaining wealth as + +$$ +\mathrm {r w} _ {t} ^ {\mathrm {e - S A F F R O N}} = \alpha (1 - \lambda) - \sum_ {j = 1} ^ {t - 1} \frac {\alpha_ {j} \mathbb {1} \{e _ {j} < 1 / \lambda \}}{R _ {j - 1} + 1} +$$ + +and update + +$$ +\alpha_ {t} = \omega_ {t} \operatorname {r w} _ {t} ^ {\mathrm {e} - \text {S A F F R O N}} \left(R _ {t - 1} + 1\right). +$$ + +Through these formulations, the update of testing levels $\alpha_{t}$ is expressed as a recursive relationship based on information from the previous time step, allowing us to compute it recursively and efficiently. + +We evaluate the runtime of various algorithms in the experiments under an AR(1) model, introduced in Appendix C.2, and the results are included in Table 4. It shows that the e-LORD and e-SAFFRON algorithms are computationally efficient. + +Table 4. Average runtime of different algorithms under an AR(1) model over 100 repetitions. + +
| | e-LORD | e-SAFFRON | e-LOND | LORD++ | SAFFRON | SupLORD |
| --- | --- | --- | --- | --- | --- | --- |
| Runtime ($\times 10^{-4}$ s) | 9.7 | 18.8 | 18.0 | 9.9 | 8.1 | 72.1 |
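To make the recursive updates above concrete, the following Python sketch implements the simplified e-LORD rule with a constant allocation coefficient $\omega_t \equiv \omega_1$ (i.e., $\varphi = \psi = 0$ in (9), as discussed in Appendix B.2); the function name and the toy e-values are our own illustration rather than a reference implementation.

```python
import numpy as np

def e_lord_constant_omega(e_values, alpha=0.05, omega1=None):
    """Sketch of e-LORD with a constant allocation coefficient omega_t = omega_1.

    Keeps the remaining wealth rw_t = alpha - sum_{j<t} alpha_j / (R_{j-1} + 1),
    sets alpha_t = omega_1 * rw_t * (R_{t-1} + 1), and rejects H_t if e_t >= 1/alpha_t.
    """
    T = len(e_values)
    omega1 = 1.0 / T if omega1 is None else omega1  # choice recommended in Appendix B.2
    rejections = np.zeros(T, dtype=bool)
    rw = alpha      # remaining wealth before the first test
    R_prev = 0      # number of rejections strictly before time t
    for t, e_t in enumerate(e_values):
        alpha_t = omega1 * rw * (R_prev + 1)
        rejections[t] = e_t >= 1.0 / alpha_t
        rw -= alpha_t / (R_prev + 1)                # wealth spent at time t
        R_prev += int(rejections[t])
    return rejections

# Toy usage: log-normal scores with E[e_t] = 1 under the null, hence valid e-values.
rng = np.random.default_rng(0)
null_e = np.exp(2.0 * rng.standard_normal(500) - 2.0)
print(int(e_lord_constant_omega(null_e).sum()), "rejections")
```

The e-SAFFRON variant only changes the wealth bookkeeping to the discounted form $\alpha(1-\lambda) - \sum_{j<t} \alpha_j \mathbb{1}\{e_j < 1/\lambda\}/(R_{j-1}+1)$ given above.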
+ +# B.4. Extension of the e-GAI Framework to p-values + +In this section, we adapt the e-GAI framework to p-values and analyze the corresponding algorithms for the long-term performance following the same strategy in Section 3 and Section 4.1. Recall that the decision $\delta_t = \mathbb{1}\{p_t \leq \alpha_t\}$ for each time $t$ . + +Extension of e-LORD & e-SAFFRON to p-values. Theorem 4.3 in the main text provides a general theoretical result for FDR control with conditionally super-uniform p-values. Leveraging Theorem 4.3, both e-LORD and e-SAFFRON can be adapted to the corresponding version applicable to p-values by replacing $\mathbb{1}\{e_t\geq 1 / \alpha_t\}$ with $\mathbb{1}\{p_t\leq \alpha_t\}$ . Specifically, $\widehat{\mathrm{FDP}}_{\mathrm{e}}^{\mathrm{LORD}}(t)$ in (7) and the update rule for $\alpha_{t}$ in (8) of the e-LORD algorithm can be directly adapted to p-values. The e-SAFFRON algorithm estimates the proportion of true nulls, requiring a slight modification when converting e-values to p-values. Considering $\lambda_{t}\equiv \lambda$ , the modified FDP overestimate and testing levels of e-SAFFRON are respectively given by $\sum_{j = 1}^{t}\frac{\alpha_{j}}{R_{j - 1} + 1}\frac{\mathbb{1}\{p_{j} > \lambda\}}{1 - \lambda}$ and $\alpha_{t} = \omega_{t}\left(\alpha (1 - \lambda) - \sum_{j = 1}^{t - 1}\frac{\alpha_{j}\mathbb{1}\{p_{j} > \lambda\}}{R_{j - 1} + 1}\right)(R_{t - 1} + 1)$ . + +To clearly distinguish the algorithms, we refer to the versions of e-LORD and e-SAFFRON adapted to p-values as pL-RAI and pS-RAI, respectively, emphasizing that the testing statistics are p-values and testing levels $\alpha_{t}$ are updated by the RAI strategy. Both pL-RAI and pS-RAI can realize provable FDR control under conditional super-uniformity. To avoid notational confusion, we define $\widehat{\mathrm{FDP}}^{\mathrm{pL - RAI}}(t)\coloneqq \widehat{\mathrm{FDP}}_{\mathrm{e}}^{\mathrm{LORD}}(t) = \sum_{j = 1}^{t}\frac{\alpha_{j}}{R_{j - 1} + 1}$ and $\widehat{\mathrm{FDP}}^{\mathrm{pS - RAI}}(t)\coloneqq \sum_{j = 1}^{t}\frac{\alpha_{j}}{R_{j - 1} + 1}\frac{\mathbb{1}\left\{p_{j} > \lambda\right\}}{1 - \lambda}$ . + +Proposition B.3. Suppose the online $p$ -values are conditionally super-uniform in (1). + +(a) For $\alpha_{t} \in \mathcal{F}_{t-1}$ satisfying $\widehat{\mathrm{FDP}}^{\mathrm{pL - RAI}}(t) \leq \alpha$ , we have $\mathrm{FDR}(t) \leq \alpha$ for all $t$ . +(b) Given a predictable sequence $\{\lambda_t\}_{t=1}^{\infty}$ , for $\alpha_t \in \mathcal{F}_{t-1}$ satisfying $\widehat{\mathrm{FDP}}^{\mathrm{pS - RAI}}(t) \leq \alpha$ , we have: (i) $\mathbb{E}\left[\widehat{\mathrm{FDP}}^{\mathrm{pS - RAI}}(t)\right] \geq \mathbb{E}\left[\mathrm{FDP}_\mathrm{e}^*(t)\right]$ where $\mathrm{FDP}_\mathrm{e}^*(t)$ is defined in (6), and (ii) $\mathrm{FDR}(t) \leq \alpha$ for all $t$ . + +The proof strategy for Proposition B.3 follows the same approach as Proposition 3.2 and Proposition 3.4, with the key distinction residing in the application of the inequality $\mathbb{E}\left[\mathbb{1}\left\{p_j > \lambda_j\right\} \mid \mathcal{F}_{j - 1}\right] \geq 1 - \lambda_j$ , which is derived from the conditionally super-uniform property of p-values. Hence, we omit the detailed derivations. + +Long-Term Performance of pL-RAI & pS-RAI. To control mem-FDR using p-values, we demonstrate that mem-FDP $^{*}$ (t) in (11) can still serve as an oracle estimate of mem-FDP. 
By controlling this estimator to be bounded by $\alpha$, we can achieve mem-FDR control for p-values satisfying the conditionally super-uniform property in (1).

Theorem B.4. Suppose the online $p$-values are conditionally super-uniform in (1). If $\mathrm{mem\text{-}FDP}^{*}(t)$ in (11) satisfies $\mathbb{E}\left[\mathrm{mem\text{-}FDP}^{*}(t)\right] \leq \alpha$, then $\mathrm{mem\text{-}FDR}(t) \leq \alpha$ for all $t$.

Leveraging Theorem B.4, both mem-e-LORD and mem-e-SAFFRON can be adapted to the corresponding versions applicable to p-values. Specifically, $\widehat{\mathrm{mem\text{-}FDP}}^{\mathrm{LORD}}(t)$ in (12) and the update rule for $\alpha_{t}$ in (13) of the mem-e-LORD algorithm can be directly adapted to p-values. Considering $\lambda_{t} \equiv \lambda$, the modified mem-FDP overestimate and testing levels of mem-e-SAFFRON are respectively given by $\sum_{j=1}^{t} \frac{\alpha_{j}}{dR_{j-1}^{\mathrm{d}} + 1} \frac{\mathbb{1}\{p_{j} > \lambda\}}{1 - \lambda}$ and $\alpha_{t} = \omega_{t} \left(\alpha(1 - \lambda) - \sum_{j=1}^{t-1} \frac{\alpha_{j} \mathbb{1}\{p_{j} > \lambda\}}{dR_{j-1}^{\mathrm{d}} + 1}\right) \left(dR_{t-1}^{\mathrm{d}} + 1\right)$.

To clearly distinguish the algorithms, we refer to the versions of mem-e-LORD and mem-e-SAFFRON adapted to p-values as mem-pL-RAI and mem-pS-RAI, respectively. Both mem-pL-RAI and mem-pS-RAI can realize provable mem-FDR control under conditional super-uniformity. To avoid notational confusion, we define $\widehat{\mathrm{mem\text{-}FDP}}^{\mathrm{pL\text{-}RAI}}(t) \coloneqq \widehat{\mathrm{mem\text{-}FDP}}^{\mathrm{LORD}}(t) = \sum_{j=1}^{t}\frac{\alpha_{j}}{dR_{j-1}^{\mathrm{d}} + 1}$ and $\widehat{\mathrm{mem\text{-}FDP}}^{\mathrm{pS\text{-}RAI}}(t) \coloneqq \sum_{j=1}^{t}\frac{\alpha_{j}}{dR_{j-1}^{\mathrm{d}} + 1}\frac{\mathbb{1}\{p_{j} > \lambda\}}{1 - \lambda}$.

Proposition B.5. Suppose the online $p$-values are conditionally super-uniform in (1).

(a) For $\alpha_{t} \in \mathcal{F}_{t-1}$ satisfying $\widehat{\mathrm{mem\text{-}FDP}}^{\mathrm{pL\text{-}RAI}}(t) \leq \alpha$, we have $\mathrm{mem\text{-}FDR}(t) \leq \alpha$ for all $t$.
(b) Given a predictable sequence $\{\lambda_t\}_{t=1}^{\infty}$, for $\alpha_t \in \mathcal{F}_{t-1}$ satisfying $\widehat{\mathrm{mem\text{-}FDP}}^{\mathrm{pS\text{-}RAI}}(t) \leq \alpha$, we have: (i) $\mathbb{E}\left[\widehat{\mathrm{mem\text{-}FDP}}^{\mathrm{pS\text{-}RAI}}(t)\right] \geq \mathbb{E}\left[\mathrm{mem\text{-}FDP}^{*}(t)\right]$, where $\mathrm{mem\text{-}FDP}^{*}(t)$ is defined in (11), and (ii) $\mathrm{mem\text{-}FDR}(t) \leq \alpha$ for all $t$.

We omit the proofs of Theorem B.4 and Proposition B.5, as they follow analogous reasoning and use identical techniques to those previously demonstrated.

# B.5. Connection to Existing Methods in GAI under Independence

In this section, we demonstrate FDR control with independent e-values and compare the e-GAI framework with existing GAI methods, highlighting that e-GAI is a unified framework.

When working with independent e-values, the correlation between the number of false rejections and the total number of rejections can be analyzed using the leave-one-out technique (Ramdas et al., 2018, Lemma 1), which ensures the control of FDR. This property is formalized as follows, paralleling Theorem 3.1 in the independent situation.

Theorem B.6. Suppose online $e$-values are valid in (2), and the null $e$-values are independent of each other and of the non-nulls.
Under independence, let the oracle $e$ -value-based estimate of FDP be given as + +$$ +\mathrm {F D P} _ {\mathrm {e} - \mathrm {i n d}} ^ {*} (t) = \sum_ {j \in \mathcal {H} _ {0} (t)} \frac {\alpha_ {j}}{R _ {t} \vee 1}. \tag {14} +$$ + +If $\mathbb{E}\left[\mathrm{FDP}_{\mathrm{e - ind}}^{*}(t)\right] \leq \alpha$ and $\{\alpha_{t}\}$ is a monotone function of $\delta_{t - 1}$ for all $t$ , then $\mathrm{FDR}(t) \leq \alpha$ for all $t$ . + +Proof. To control online FDR, we have + +$$ +\begin{array}{l} \operatorname {F D R} (t) = \mathbb {E} \left[ \frac {\sum_ {j \in \mathcal {H} _ {0} (t)} \delta_ {j}}{R _ {t} \vee 1} \right] = \mathbb {E} \left[ \frac {\sum_ {j \in \mathcal {H} _ {0} (t)} \mathbb {1} \left\{e _ {j} \geq \frac {1}{\alpha_ {j}} \right\}}{R _ {t} \vee 1} \right] = \mathbb {E} \left[ \sum_ {j \in \mathcal {H} _ {0} (t)} \mathbb {E} \left[ \frac {\mathbb {1} \left\{\frac {1}{e _ {j}} \leq \alpha_ {j} \right\}}{R _ {t} \vee 1} | \mathcal {F} _ {j - 1} \right] \right] \\ \stackrel {(i)} {\leq} \mathbb {E} \left[ \sum_ {j \in \mathcal {H} _ {0} (t)} \mathbb {E} \left[ \frac {\alpha_ {j}}{R _ {t} \vee 1} \mid \mathcal {F} _ {j - 1} \right] \right] = \mathbb {E} \left[ \frac {\sum_ {j \in \mathcal {H} _ {0} (t)} \alpha_ {j}}{R _ {t} \vee 1} \right] \leq \alpha , \\ \end{array} +$$ + +where the inequality (i) uses the transformation from e-values into p-values and the leave-one-out technique as in the (Ramdas et al., 2017, Lemma 1) and (Ramdas et al., 2018, Lemma 1) due to the independence. Thus we finish the whole proof. + +Theorem B.6 builds a bridge between our framework and previous methods based on p-values in prior works introduced in Section 2.2. Specifically, we established the connection between e-LORD and LORD, as well as between e-SAFFRON and SAFFRON, respectively. + +e-LORD & LORD++. Within the e-GAI framework, we can overestimate $\mathrm{FDP}_{\mathrm{e - ind}}^{*}(t)$ in (14) by $\widehat{\mathrm{FDP}}_{\mathrm{e - ind}}^{\mathrm{LORD}}(t) := \frac{\sum_{j=1}^{t} \alpha_{j}}{R_{t} \vee 1}$ , same as $\widehat{\mathrm{FDP}}^{\mathrm{LORD}}(t)$ in (4), in the independent scenario. Then the process of generating testing levels $\{\alpha_{t}\}$ by $\mathrm{LORD}++$ (Ramdas et al., 2017) can be regarded as the e-LORD algorithm, as it satisfies the condition $\widehat{\mathrm{FDP}}_{\mathrm{e - ind}}^{\mathrm{LORD}}(t) \leq \alpha$ . Therefore, $\mathrm{LORD}++$ can be included within the e-GAI framework. + +Furthermore, we can show the equivalence between e-LORD and LORD++. On the one hand, when the independent e-values $\{e_t\}$ are available at each time, then $1 / e_t$ is a valid p-value by (23) in Appendix B.7, and applying LORD++ to $\{1 / e_t\}$ is equivalent to applying e-LORD to $\{e_t\}$ , wherein the testing levels $\{\alpha_t\}$ for e-LORD are derived following the LORD++ procedure (Ramdas et al., 2017), i.e., + +$$ +\alpha_ {t} ^ {\mathrm {L O R D} + +} = \gamma_ {t} W _ {0} + (\alpha - W _ {0}) \gamma_ {t - \tau_ {1}} + \alpha \sum_ {j: \tau_ {j} < t, \tau_ {j} \neq \tau_ {1}} \gamma_ {t - \tau_ {j}}, \tag {15} +$$ + +where $W_0 > 0$ is the initial $\alpha$ -wealth, $\tau_t$ is the time of the $t$ -th rejection and $\{\gamma_t\}$ is pre-specified non-negative sequence summing to one. + +On the other hand, when the independent p-values $\{p_t\}$ are available instead, then $\mathbb{1}\{p_t \leq \alpha_t\} / \alpha_t$ is a valid e-value. 
This is because + +$$ +\mathbb {E} \left[ \frac {\mathbb {1} \{p _ {t} \leq \alpha_ {t} \}}{\alpha_ {t}} \mid \mathcal {F} _ {t - 1} \right] \leq 1 +$$ + +due to the conditionally super-uniform property of $p_t$ . Define $e_t = \mathbb{1}\{p_t \leq \alpha_t\} / \alpha_t$ and applying LORD++ to $\{p_t\}$ is equivalent to applying e-LORD to $\{e_t\}$ with the testing levels $\{\alpha_t\}$ generated in the same manner as LORD++ (Ramdas + +et al., 2017). To prove it, denote $\delta_t^p = \mathbb{1}\{p_t\leq \alpha_t^p\}$ and $\delta_t^e = \mathbb{1}\{e_t\geq \frac{1}{\alpha_t^e}\}$ respectively to easily distinguish. It can be readily verified that $\delta_1^p = \delta_1^e$ . Suppose for all $j\leq t - 1$ , we have $\delta_j^p = \delta_j^e$ . Then for time $t$ $\alpha_{t}^{p} = \alpha_{t}^{e}$ are both generated by the same LORD++ algorithm in (15) and hence we omit the superscript. If $\delta_t^p = 1$ , it follows that $p_t\leq \alpha_t$ , which implies that $e_t = 1 / \alpha_t$ and consequently $\delta_t^e = 1$ . Conversely, if $\delta_t^p = 0$ , it also holds that $\delta_t^e = 0$ . The proof can then be concluded through recursive reasoning. + +e-SAFFRON & SAFFRON. It is natural to derive the relationship between e-SAFFRON and SAFFRON. In the independent scenario, we can overestimate $\mathrm{FDP}_{\mathrm{e - ind}}^{*}(t)$ in (14) by $\widehat{\mathrm{FDP}}_{\mathrm{e - ind}}^{\mathrm{SAFFRON}}(t) := \frac{\sum_{j=1}^{t}\alpha_{j}\frac{1\{e_{j}<1 / \lambda\}}{1-\lambda}}{R_{t}\vee 1}$ , same as $\widehat{\mathrm{FDP}}^{\mathrm{SAFFRON}}(t)$ in (5) when $p_{t}=1/e_{t}$ . Therefore, the process of generating testing levels $\{\alpha_{t}\}$ by SAFFRON (Ramdas et al., 2018) can be regarded as the e-SAFFRON algorithm as it satisfies $\widehat{\mathrm{FDP}}_{\mathrm{e - ind}}^{\mathrm{SAFFRON}}(t) \leq \alpha$ . + +However, the conclusion on the equivalence between e-SAFRON and SAFFRON differs from that between e-LORD and LORD++. When the independent e-values $\{e_t\}$ are available at each time, applying SAFFRON to valid p-values $\{p_t = 1 / e_t\}$ is equivalent to applying e-SAFRON to $\{e_t\}$ , wherein the testing levels $\{\alpha_t\}$ for e-SAFRON are derived following the SAFFRON procedure (Ramdas et al., 2018), i.e., + +$$ +\alpha_ {t} ^ {\text {S A F F R O N}} = \min \left\{\lambda , W _ {0} \gamma_ {t - C _ {0 +}} + ((1 - \lambda) \alpha - W _ {0}) \gamma_ {t - \tau_ {1} - C _ {1 +}} + \sum_ {j \geq 2} (1 - \lambda) \alpha \gamma_ {t - \tau_ {j} - C _ {j +}} \right\}. \tag {16} +$$ + +Here $\lambda \in (0,1)$ is a user-chosen parameter and $C_{i + }(t) = \sum_{j = \tau_i + 1}^{t - 1}\mathbb{1}\{p_j\leq \lambda \}$ . + +The situation changes when the independent p-values are available. In addition to the conditions of independence, if the p-values further satisfy the conditionally uniformly distributed in (1), then there are no valid e-values in (2) such that applying SAFFRON to $\{p_t\}$ is equivalent to applying e-SAFFRON to those e-values, wherein the testing levels $\{\alpha_t\}$ for e-SAFFRON are derived following the SAFFRON procedure (Ramdas et al., 2018). We demonstrate it by contradiction. Suppose there exists valid e-values $\{e_t\}$ with $e_t = f^c(p_t)$ such that applying SAFFRON to $\{p_t\}$ is equivalent to applying e-SAFFRON to $\{e_t\}$ , wherein the testing levels $\{\alpha_t\}$ for e-SAFFRON are derived following the SAFFRON procedure in (16). Since the p-values are conditionally uniformly distributed, the p-values are also continuously distributed. 
Note that the p-values play three different roles in the SAFFRON algorithm: rejection if $p_t \leq \alpha_t$ , candidate if $\alpha < p_t \leq \lambda$ , and inclusion in the estimate of FDP* if $p_t > \lambda$ . Correspondingly, the e-values $f^c(p_t)$ satisfy $f^c(p_t) \geq 1 / \alpha_t$ if $p_t \leq \alpha_t$ , $f^c(p_t) \geq 1 / \lambda$ if $\alpha < p_t \leq \lambda$ , and $f^c(p_t) < 1 / \lambda$ if $p_t > \lambda$ . By the above assumptions, we have + +$$ +\begin{array}{l} \mathbb {E} \left[ f ^ {\mathrm {c}} \left(p _ {t}\right) \right] = \int f ^ {\mathrm {c}} (p) \mathbb {1} \left\{p \leq \alpha_ {t} \right\} d p + \int f ^ {\mathrm {c}} (p) \mathbb {1} \left\{\alpha_ {t} < p \leq \lambda \right\} d p + \int f ^ {\mathrm {c}} (p) \mathbb {1} \left\{p > \lambda \right\} d p \\ \geq \int \frac {1}{\alpha_ {t}} \mathbb {1} \{p \leq \alpha_ {t} \} d p + \int \frac {1}{\lambda} \mathbb {1} \{\alpha_ {t} < p \leq \lambda \} d p + \int f ^ {\mathrm {c}} (p) \mathbb {1} \{p > \lambda \} d p \\ \geq 1 + \int \frac {1}{\lambda} \mathbb {1} \left\{\alpha_ {t} < p \leq \lambda \right\} d p \\ > 1 \\ \end{array} +$$ + +with $\alpha_{t}$ generated by SAFFRON, which contradicts the assumption that $f^{\mathrm{c}}(p_t)$ is a valid e-value. In summary, there are no valid e-values transformed by p-values that also serve the three roles accordingly in e-SAFFRON. This conclusion implies that there may be a loss of power when testing by e-values compared to using p-values in the independent case. + +# B.6. Alternative Expressions of e-LORD & e-SAFFRON + +In this section, we provide alternative expressions of e-LORD and e-SAFFRON, respectively. These expressions may aid in better understanding the dynamics of the allocation process and how the e-LOND algorithm can be considered as a special case within the e-GAI framework with design $\omega_{t}$ from a given sequence $\{\gamma_t\}$ . + +e-LORD. Recall the e-LORD algorithm updates testing levels as $\alpha_{1} = \alpha \omega_{1}$ and for $t\geq 2$ + +$$ +\alpha_ {t} = \omega_ {t} \left(\alpha - \sum_ {j = 1} ^ {t - 1} \frac {\alpha_ {j}}{R _ {j - 1} + 1}\right) (R _ {t - 1} + 1), \tag {17} +$$ + +where $\omega_{1},\ldots ,\omega_{t}\in \mathcal{F}_{t - 1}$ are the allocation coefficients, updated as in (9). By recursion and calculation, (17) can be simplified to + +$$ +\alpha_ {t} = \alpha \left(R _ {t - 1} + 1\right) \omega_ {t} \prod_ {j = 1} ^ {t - 1} \left(1 - \omega_ {j}\right). \tag {18} +$$ + +e-SAFFRON. We follow the same routine as above. Recall the e-SAFFRON algorithm updates testing levels as $\alpha_{1} = \alpha (1 - \lambda)\omega_{1}$ and for $t\geq 2$ .. + +$$ +\alpha_ {t} = \omega_ {t} \left(\alpha (1 - \lambda) - \sum_ {j = 1} ^ {t - 1} \frac {\alpha_ {j} \mathbb {1} \left\{e _ {j} < \frac {1}{\lambda} \right\}}{R _ {j - 1} + 1}\right) \left(R _ {t - 1} + 1\right), \tag {19} +$$ + +where $\omega_{1},\ldots ,\omega_{t}\in \mathcal{F}_{t - 1}$ are the allocation coefficients, updated as in (9). By recursion and calculation, (19) can be simplified to + +$$ +\alpha_ {t} = \alpha (1 - \lambda) \left(R _ {t - 1} + 1\right) \omega_ {t} \prod_ {j = 1} ^ {t - 1} \left(1 - \omega_ {j} \mathbb {I} \left\{e _ {j} < 1 / \lambda \right\}\right). \tag {20} +$$ + +(18) and (20) provide an alternative perspective for understanding the allocation process. 
In addition to the item regarding the number of previous rejections, the remaining items focus on the process of wealth allocation, in which the total wealth $\alpha$ is dynamically allocated by $\omega_{t}$ and the prior allocation coefficients $\{\omega_{j}\}_{j = 1}^{t - 1}$ . + +# B.7. Construction of Online e-values + +In this section, we offer useful suggestions about the construction of valid e-values that can be applied to the online setting. Recall that a non-negative variable $e_t$ is a valid online e-value if it satisfies the conditional validity property: + +$$ +\mathbb {E} \left[ e _ {t} \mid \mathcal {F} _ {t - 1} \right] \leq 1 \text {i f} \theta_ {t} = 0. +$$ + +Likelihood ratio e-values. Assume that the null conditional distribution $F_{0,t|\mathcal{F}_{t - 1}}$ is known for all $t$ and denote the corresponding density function $f_{0,t|\mathcal{F}_{t - 1}}$ . At each time $t$ , the conditional distribution $F_{t|\mathcal{F}_{t - 1}}$ and the corresponding density $f_{t|\mathcal{F}_{t - 1}}$ are unknown, and we can estimate $f_{t|\mathcal{F}_{t - 1}}$ using parametric or nonparametric methods. Then we can construct the likelihood ratio e-value as + +$$ +e _ {t} = \frac {\hat {f} _ {t | \mathcal {F} _ {t - 1}} \left(x _ {t}\right)}{f _ {0 | \mathcal {F} _ {t - 1}} \left(x _ {t}\right)} \tag {21} +$$ + +for each time $t$ . The e-value $e_t$ in (21) is valid since + +$$ +\mathbb {E} \left[ e _ {t} \mid \mathcal {F} _ {t - 1} \right] = \int \frac {\hat {f} _ {t | \mathcal {F} _ {t - 1}} (x _ {t})}{f _ {0 | \mathcal {F} _ {t - 1}} (x _ {t})} f _ {0 | \mathcal {F} _ {t - 1}} (x _ {t}) d x _ {t} = \int \hat {f} _ {t | \mathcal {F} _ {t - 1}} (x _ {t}) d x _ {t} = 1. +$$ + +p-value-based e-values. In case the p-values associated with each hypothesis are available, it is possible to convert them to e-values using a 'p-to-e calibrator' (Shafer et al., 2011), albeit with possible power loss (Vovk & Wang, 2021). A 'p-to-e calibrator' is a decreasing function $f^{\mathrm{c}}:[0,1]\mapsto [0,\infty]$ , such that $\int_0^1 f^{\mathrm{c}}(s)ds = 1$ . Then we can construct the p-value-based e-value as + +$$ +e _ {t} = f ^ {\mathrm {c}} \left(p _ {t}\right) \tag {22} +$$ + +for each time $t$ , where $p_t$ is the corresponding p-value. The e-value $e_t$ in (22) is valid as long as the p-value $p_t$ is conditionally super-uniformly distributed under the null. Note that the choices for $f^{\mathrm{c}}$ vary. For example, we can simply take $f^{\mathrm{c}}(s) = \eta s^{\eta - 1}$ for some $\eta \in (0, 1)$ (Shafer, 2021; Vovk & Wang, 2021). + +Note that an e-value $e_t$ can be naturally transformed into a p-value $p_t$ by $p_t = \min \{1 / e_t, 1\}$ . This is because + +$$ +\mathbb {P} \left(p _ {t} \leq u \mid \mathcal {F} _ {t - 1}\right) \leq \mathbb {P} \left(e _ {t} \geq 1 / u \mid \mathcal {F} _ {t - 1}\right) \leq u \mathbb {E} \left[ e _ {t} \mid \mathcal {F} _ {t - 1} \right] \leq u \tag {23} +$$ + +for all $u\in (0,1)$ by Markov's inequality. 
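To illustrate the constructions in this subsection, the sketch below builds likelihood-ratio e-values for a Gaussian location model, applies the calibrator $f^{\mathrm{c}}(s) = \eta s^{\eta-1}$, and converts e-values back to p-values via $p_t = \min\{1/e_t, 1\}$; the Gaussian model and the fixed alternative mean are illustrative assumptions, not requirements of the framework.

```python
import numpy as np

def likelihood_ratio_evalue(x, mu_alt, sigma=1.0):
    """e_t = f_alt(x_t) / f_0(x_t) for a N(0, sigma^2) null and a N(mu_alt, sigma^2)
    alternative; mu_alt must be predictable (chosen from information before time t)."""
    return np.exp((mu_alt * np.asarray(x) - 0.5 * mu_alt**2) / sigma**2)

def p_to_e(p, eta=0.5):
    """Calibrator f_c(s) = eta * s**(eta - 1), eta in (0, 1); it integrates to 1 on [0, 1]."""
    return eta * np.asarray(p) ** (eta - 1.0)

def e_to_p(e):
    """p_t = min(1 / e_t, 1), valid by Markov's inequality."""
    return np.minimum(1.0 / np.asarray(e), 1.0)

# Toy usage: under the null the likelihood-ratio e-values have conditional mean exactly 1.
rng = np.random.default_rng(1)
x_null = rng.standard_normal(5)
e = likelihood_ratio_evalue(x_null, mu_alt=1.0)
print(e.round(3), e_to_p(e).round(3), p_to_e([0.04, 0.5]).round(3))
```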
+ +![](images/f6b9035ff9e8f6adc2c43135fd2104ef2fd6fec57e0c99928e816e0e73b1b5c8.jpg) +(a) $\omega_{1} = 0.001$ + +![](images/af79b79d26c47dd6dbe56bce6bb4ae5fcefb04a7387611c2bc43a86a04bfb75c.jpg) + +![](images/d3a9dffb0346bf76d3d7e5c126f33f0e128a7f8cf46d656818168457978db2e2.jpg) +(b) $\omega_{1} = 0.005$ + +![](images/015a12287ddb46eb39a4a692eefabf7e80f05aa12cc3a00f9c9c98b7db875474.jpg) + +![](images/19c1bec99e2353ae55c59f4daab019772d2c13c22ab37188c18c5f678689a46a.jpg) +(c) $\omega_{1} = 0.01$ + +![](images/79a6c195a5fbb88b7fba8d687b053d2959a61b33396bad9408d3b6b05e685feb.jpg) +Figure 4. Empirical FDR and power with standard error versus $\psi$ for e-LORD, with $\rho = 0.5$ , $L = 30$ , $\mu_{\mathrm{c}} = 3$ and $\pi_1 = 0.2$ . The $\varphi_0, \varphi_1, \ldots, \varphi_5$ methods correspond to $\varphi = 0, 0.1, \ldots, 0.5$ , respectively. The parameter configuration ( $\omega_1 = 0.005$ , $\phi = 0.5$ , $\varphi = 0.5$ ) yielded the most favorable performance. + +![](images/6cf30e46c8941c98071cfef53cce9425b62fcf700e3dc3d1c52371c1c5568272.jpg) +(d) $\omega_{1} = 0.05$ + +![](images/cb022127977784ed8db173b31ac2d0ddd9c978e35c153fab5aa5e651d82635db.jpg) + +# C. Additional Simulation Results + +# C.1. Simulation: FDR control + +In this section, we investigate the impact of different parameters on the performance of our proposed method and, based on our findings, provide recommendations for appropriate parameter choices. Simulation results under other settings are also present to show the performance of e-LORD, e-SAFFRON, pL-RAI, pS-RAI and other algorithms. + +Updating $\omega_{t}$ with different parameters $\omega_{1},\varphi ,\psi$ . To investigate the performance of e-GAI with varying parameters $\omega_{1},\varphi ,\psi$ , we conduct a series of experiments on e-LORD, e-SAFFRON $(\lambda = 0.1)$ , pL-RAI and pS-RAI $(\lambda = 0.1)$ , following the settings outlined in Section 5.1, with $\alpha = 0.05,\rho = 0.5,L = 30,\mu_{\mathrm{c}} = 3$ , and $\pi_1 = 0.2$ + +We vary $\omega_{1}$ , $\varphi$ , and $\psi$ , comparing the performance of the e-LORD, e-SAFFRON, pL-RAI, and pS-RAI under various parameter settings. Figures 4 to 7 show the results of e-LORD, e-SAFFRON, pL-RAI, and pS-RAI with different parameters, respectively. As shown in Figures 4 to 7, the e-LORD, e-SAFFRON, pL-RAI, and pS-RAI algorithms successfully achieve FDR control across all settings, which aligns with the theoretical guarantees. The power of the algorithm is affected by variations in the parameters $\omega_{1}, \varphi, \psi$ . When $\omega_{1} = 0.005$ , $\varphi = \psi = 0.05$ , all of e-LORD, e-SAFFRON, pL-RAI, and pS-RAI demonstrate high statistical power. Therefore, the parameter settings are chosen as $\omega_{1} = 0.005$ , $\varphi = \psi = 0.05$ in our experiments. + +Using different constant parameter $\lambda$ in e-SAFFRON & pS-RAI. To evaluate the performance of e-SAFFRON and pS-RAI with different values of $\lambda$ , we adopt the same experimental settings as described in Section 5.1. + +Figure 8 show the results of e-SAFFRON and pS-RAI with different $\lambda$ . As illustrated in Figure 8, both e-SAFFRON and pS-RAI maintain FDR control across all different values of $\lambda$ , which is consistent with the theoretical guarantees. It is evident that when $\lambda = 0.1$ , both the e-SAFFRON and pS-RAI algorithms attain the highest statistical power. Therefore, we recommend setting $\lambda = 0.1$ as the default value when applying the e-SAFFRON and pS-RAI algorithms. 
+ +Setting different correlation parameter $\rho$ . To assess the performance of e-LORD, e-SAFFRON, pL-RAI and pS-RAI under varying levels of dependence, we adopt the same experimental settings as described in Section 5.1, and compare them + +![](images/1a92b3b5b5b9604763262be0dc2a8e5d5250d860988007ffba55fec30099d0e1.jpg) +(a) $\omega_{1} = 0.001$ + +![](images/12267ce151d3a72582b670781aef99f4c610701d8fb36853e4993b091bfdb882.jpg) +(b) $\omega_{1} = 0.005$ + +![](images/51cab10bd860c3e8591a01af0614eef090d24c6b3f7e3488feb36e6867bfa8e2.jpg) +(c) $\omega_{1} = 0.01$ + +![](images/684a9db3641a2aab849d8b7287d63a13ea95a8cf99c446a980207a5c11e389e9.jpg) +(d) $\omega_{1} = 0.05$ +Figure 5. Empirical FDR and power with standard error versus $\psi$ for e-SAFFRON, with $\lambda = 0.1$ , $\rho = 0.5$ , $L = 30$ , $\mu_{\mathrm{c}} = 3$ and $\pi_1 = 0.2$ . The $\varphi_0, \varphi_1, \ldots, \varphi_5$ methods correspond to $\varphi = 0, 0.1, \ldots, 0.5$ , respectively. The parameter configuration ( $\omega_1 = 0.005$ , $\phi = 0.5$ , $\varphi = 0.5$ ) yielded the most favorable performance. + +![](images/4785e5c535f06f0d493a90fa4b28a1d631d4636d3bfeb60ab1ddb09f750c8dc0.jpg) +(a) $\omega_{1} = 0.001$ + +![](images/2b87e4034fc186b188bb526afef803deee2ed0e1b3b8eeb1f2f1c8b48231805f.jpg) +(b) $\omega_{1} = 0.005$ + +![](images/3807269ea44d28748fca3ac19d3c6b1710880c2b0c037cc78499dc9b62d56c0e.jpg) +(c) $\omega_{1} = 0.01$ +Figure 6. Empirical FDR and power with standard error versus $\psi$ for pL-RAI, with $\rho = 0.5$ , $L = 30$ , $\mu_{\mathrm{c}} = 3$ and $\pi_1 = 0.2$ . The $\varphi_0, \varphi_1, \ldots, \varphi_5$ methods correspond to $\varphi = 0, 0.1, \ldots, 0.5$ , respectively. The parameter configuration ( $\omega_1 = 0.005$ , $\phi = 0.5$ , $\varphi = 0.5$ ) yielded the most favorable performance. + +![](images/82af10d2aad99d3be4c32e7b61e47400b015533b48d47d157a4a168969ff4ef2.jpg) +(d) $\omega_{1} = 0.05$ + +![](images/5e587b3564f00a0a15226c052029c986375d0dd2a88b42d992edda626a5db181.jpg) +(a) $\omega_{1} = 0.001$ + +![](images/a189257d6e1a55dce317ffb80f0159b75b5e42e18ca9d289753d319c4f03bb98.jpg) +(b) $\omega_{1} = 0.005$ + +![](images/406bf8a68f292ace37b9c6e838ee1677c3f9bb5061428d07003018a16198e34e.jpg) +(c) $\omega_{1} = 0.01$ + +![](images/99d13aa714d8e556b26d23b7931ecb03ea48301380c6374bd0080380d01459e2.jpg) +(d) $\omega_{1} = 0.05$ +Figure 7. Empirical FDR and power with standard error versus $\psi$ for pS-RAI, with $\lambda = 0.1$ , $\rho = 0.5$ , $L = 30$ , $\mu_{\mathrm{c}} = 3$ and $\pi_1 = 0.2$ . The $\varphi_0, \varphi_1, \ldots, \varphi_5$ methods correspond to $\varphi = 0, 0.1, \ldots, 0.5$ , respectively. The parameter configuration ( $\omega_1 = 0.005$ , $\phi = 0.5$ , $\varphi = 0.5$ ) yielded the most favorable performance. + +![](images/b93cd93674240b1d3cbd84d13f2ba6323aa5fb9b89c190ace490dcf0cccb00d4.jpg) +(a) e-SAFFRON + +![](images/1371dc5e3b1fde63837f652fbe6ec01d00913b432ec43712c47056055941bd64.jpg) +(b) pS-RAI +Figure 8. Empirical FDR and power with standard error versus proportion of alternative hypotheses $\pi_1$ for e-SAFFRON and pS-RAI, with $\rho = 0.5$ and $\mu_{\mathrm{c}} = 3$ . The $\lambda_1, \lambda_2, \dots, \lambda_5$ methods correspond to $\lambda = 0.1, 0.2, \dots, 0.5$ , respectively. The parameter $\lambda = 0.1$ yielded the most favorable performance. 
+ +![](images/730c60fc2a5967ddd10e3faf238815c5851d4dd723f735862d04f3cbf8975ba2.jpg) +(a) $\rho = 0$ + +![](images/fbaf86b5a3276ebcdc122df0e88a66e6d59196e17c56d9b6debdecefb33d68a4.jpg) + +![](images/ec530cc31268138a689bebfb276d5b2e64d171e810a17af470c1f9f72a80071e.jpg) +(b) $\rho = 0$ + +![](images/b5d5e9ac4142a85d27a06f90b42e7b805ed5e9e127990ae13f61c93702c7459b.jpg) + +![](images/c5ffb932a9a6402fd7a6bf73341da7009c35fd1ab5b8c7cb535ce5ea5326e91e.jpg) +(c) $\rho = 0.1$ + +![](images/a0a8f5fa4f2e04d7e56f8a4202578293b7d16741fae5df8fc6fc25b7213878e4.jpg) + +![](images/40f40e0bac79e29f6ac4db27652f099a96fbb60b7d21608d04b8c32ea19bbbab.jpg) +(d) $\rho = 0.1$ + +![](images/982fabf266c61d0106a3db045205e526cd8fa4f2765592ad4c842447d1481771.jpg) + +![](images/426c8ac4341d0a350ee480a2668474a379c7415f4ad52c6eaad16799d4350706.jpg) +(e) $\rho = 0.3$ + +![](images/0647cd06ce0d2ae0e80c0d3c6ad031c5b4da643b823ff3781b78f6a6e3fa431e.jpg) +Figure 9. Empirical FDR and power with standard error versus proportion of alternative hypotheses $\pi_1$ for e-LORD, e-SAFFRON, e-LOND, pL-RAI, pS-RAI, LORD++, SAFFRON and SupLORD, with varying $\rho$ and fixed $\mu_{\mathrm{c}} = 3$ . Our pL-RAI and pS-RAI consistently outperform LORD++ and SupLORD, whereas SAFFRON fails to control FDR when $\rho$ is large and $\pi_1$ is small. + +![](images/e9f19ccc82f1143a6c492c062577116b783206b86aa9d30a721a37765adbf9dc.jpg) +(f) $\rho = 0.3$ + +![](images/9ff33b48056ffa93d348e4d27f8f275359bdde75b7988f30959954c0bad76d7b.jpg) + +with e-LOND, LORD++, SAFFRON and SupLORD. + +Figure 9 show the results of various methods with varying $\rho$ when $\mu_{\mathrm{c}} = 3$ . The performance of each method is similar to that in the main body. For e-value-based methods, irrespective of the correlation parameter $\rho$ , e-LORD, e-SAFFRON, and e-LOND successfully achieve FDR control. Moreover, e-LORD and e-SAFFRON exhibit substantially higher statistical power than e-LOND across all settings by dynamically updating the testing levels, leading to more effective discoveries. For p-value-based methods, as shown in Figure 9, SAFFRON exhibits FDR inflation under such conditions. Besides, our pL-RAI and pS-RAI always gain higher power than $\mathrm{LORD}++$ and SupLORD. In the independence case ( $\rho = 0$ ), each method successfully maintains FDR control empirically, as guaranteed by their theoretical results. + +# C.2. Simulation: mem-FDR control + +We consider a new experimental setup where the samples follow a time-varying auto-regressive AR(1) model. For each time $t$ , $X_{t} = \rho_{t}X_{t - 1} + \mu_{t} + \varepsilon_{t}$ with $\varepsilon_t\stackrel {\mathrm{i.i.d.}}{\sim}\mathcal{N}(0,1)$ , where the auto-regressive coefficient $\rho_{t} = \frac{2}{1 + \exp(-\eta(t - t_{0}))} -1\in (-1,1)$ . We set $\eta = 0.01$ and $t_0 = T / 2$ . We aim to test whether there is a positive drift and the null hypothesis takes $\mathbb{H}_t:\mu_t = 0$ for each time $t\in [T]$ . The true labels $\theta_{t}$ is generated from Bernoulli $(\pi_1)$ and the positive drift $\mu_t = \mu_c > 0$ if $\theta_t = 1$ . Note that the correlation coefficient $\rho_{t}$ between adjacent samples varies over time, resulting in a complex correlation structure. 
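For reference, a minimal simulation of this time-varying AR(1) data-generating process could look as follows; the function name and default arguments are ours.

```python
import numpy as np

def simulate_time_varying_ar1(T=10000, pi1=0.4, mu_c=4.0, eta=0.01, seed=0):
    """X_t = rho_t * X_{t-1} + mu_t + eps_t, eps_t ~ N(0, 1), with
    rho_t = 2 / (1 + exp(-eta * (t - t0))) - 1 and t0 = T / 2."""
    rng = np.random.default_rng(seed)
    t0 = T / 2
    theta = rng.binomial(1, pi1, size=T)   # true labels: theta_t = 1 under the alternative
    mu = mu_c * theta                       # positive drift mu_c when theta_t = 1
    x = np.empty(T)
    x_prev = 0.0
    for t in range(1, T + 1):
        rho_t = 2.0 / (1.0 + np.exp(-eta * (t - t0))) - 1.0
        x[t - 1] = rho_t * x_prev + mu[t - 1] + rng.standard_normal()
        x_prev = x[t - 1]
    return x, theta

x, theta = simulate_time_varying_ar1(T=1000, seed=0)
print(x[:3].round(2), theta[:3])
```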
+ +We compare the performance of mem-e-LORD, mem-e-SAFFRON, mem-pL-RAI and mem-pS-RAI with mem-LORD++ + +![](images/d0dade3094d567cccdaad501b3713cc067d55bb582a48853c5c67858161a550d.jpg) +(a) $T = 10000,\mu_{c} = 3$ + +![](images/f74caa3e644e51080d56b828091792200e3cfe6ee48d901ad4ddd6eba890c44d.jpg) +(b) $T = 20000,\mu_{c} = 3$ + +![](images/c1611fa18a4eb248b3d673e46d1ec05d0b1ea24eec1df056ee49145d8bff2876.jpg) +(c) $T = 10000,\mu_{c} = 4$ + +![](images/8d1092a60841eaa463d8407bb1a0a27d38548a02b3985d421ebccc153a279b79.jpg) +(d) $T = 20000,\mu_{c} = 4$ +Figure 10. Empirical mem-FDR and mem-Power with standard error versus proportion of alternative hypotheses $\pi_1$ for mem-e-LORD, mem-e-SAFFRON, mem-LORD++, mem-pL-RAI and mem-pS-RAI with $\mu_c = 3,4$ , at $T = 10000$ , 20000. All procedures maintain FDR control. Our e-GAI and RAI methods exhibit superior mem-power than mem-LORD++ when $\mu_c$ or $\pi_1$ is small. + +at $T \in \{10000, 20000\}$ with different proportions of alternative hypotheses $\pi_1$ and strengths of signals $\mu_c$ . We utilize the AR(1) model and the normal distribution of noise to calculate e-values and p-values that satisfy (2) and (1), respectively. + +As shown in Figure 10, all methods successfully control the mem-FDR. Our procedures perform well over long-term testing periods, especially with sparse alternatives. In such cases, these algorithms achieve higher mem-Power than mem-LORD++. The mem-Power refers to the decaying memory power in (Ramdas et al., 2017), defined as + +$$ +\operatorname {m e m - P o w e r} (t) := \mathbb {E} \left[ \frac {\sum_ {j \notin \mathcal {H} _ {0} (t)} d ^ {t - j} \delta_ {j}}{\sum_ {j \notin \mathcal {H} _ {0} (t)} d ^ {t - j}} \right]. +$$ + +# C.3. More results for Real Data: NYC Taxi Anomaly Detection + +We adjust $\alpha = 0.2$ and apply pL-RAI, pS-RAI, SAFFRON, LORD++, and SupLORD here to analyze this dataset. We construct two-sided Gaussian p-values by estimating the mean and variance from the residuals. + +We compare their performance in terms of the proportion of discoveries out of marked anomalous regions, denoted here as $\widehat{\mathrm{FDP}}$ and the number of discovered anomalous regions in Table 5. We observe that the FDP of LORD++ and SAFFRON far exceeds the testing level $\alpha = 0.2$ and SupLORD slightly exceeds $\alpha$ . Meanwhile, pL-RAI and pS-RAI effectively maintain FDP below the target level. As illustrated in Figure 11, pL-RAI and pS-RAI identify many points within the anomalous regions. + +Table 5. Proportion of points rejected out of anomalous regions. + +
| Method | pL-RAI | pS-RAI | LORD++ | SAFFRON | SupLORD |
| --- | --- | --- | --- | --- | --- |
| FDP | 0.197 | 0.195 | 0.261 | 0.361 | 0.217 |
| Num Discovery | 201 | 259 | 257 | 595 | 406 |
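As a rough sketch of the two-sided Gaussian p-values described above, one might standardize the residuals with their estimated mean and variance and take $p_t = 2\,(1 - \Phi(|z_t|))$; this is our reading of the construction, not the authors' exact preprocessing.

```python
import math
import numpy as np

def two_sided_gaussian_pvalues(residuals):
    """p_t = 2 * (1 - Phi(|z_t|)) = erfc(|z_t| / sqrt(2)), where z_t standardizes each
    residual using the mean and standard deviation estimated from all residuals."""
    r = np.asarray(residuals, dtype=float)
    z = (r - r.mean()) / r.std(ddof=1)
    return np.array([math.erfc(abs(zi) / math.sqrt(2.0)) for zi in z])

print(two_sided_gaussian_pvalues([0.1, -0.2, 5.0, 0.3]).round(4))
```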
+ +![](images/a0c2a55ce78e9e80666f6187ed714af81144aa8409f73f72adcd8da0d13895ec.jpg) + +![](images/a632948086e1d78fa9cbe165991f872e65b6117b31d3a6defde5e8f6e8509565.jpg) + +![](images/7ddc2a844953c2d552af771f7ece74b73b5af5d3c1a65ec0183a177828b0ccb4.jpg) + +![](images/a83862cec1cedbecf6525eb28da3dfcf27aa5b6d30c1417c7ef8809cb1abef09.jpg) +Figure 11. Anomaly points detected by pL-RAI, pS-RAI, LORD++, and SupLORD. Rejection points of all procedures are marked by dark blue points. Red regions refer to known anomalies. The testing level is chosen as 0.2. \ No newline at end of file diff --git "a/ICML/2025/e-GAI_ e-value-based Generalized $\316\261$-Investing for Online False Discovery Rate Control/images.zip" "b/ICML/2025/e-GAI_ e-value-based Generalized $\316\261$-Investing for Online False Discovery Rate Control/images.zip" new file mode 100644 index 0000000000000000000000000000000000000000..2b4a4ef1d3c095061cd2ba8ee9d0e8729cc53c21 --- /dev/null +++ "b/ICML/2025/e-GAI_ e-value-based Generalized $\316\261$-Investing for Online False Discovery Rate Control/images.zip" @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:570ab7c5c51a79aa1ebdcc032910628a2fa9ba7048d3fd31fa2294599ec0b744 +size 1493033 diff --git "a/ICML/2025/e-GAI_ e-value-based Generalized $\316\261$-Investing for Online False Discovery Rate Control/layout.json" "b/ICML/2025/e-GAI_ e-value-based Generalized $\316\261$-Investing for Online False Discovery Rate Control/layout.json" new file mode 100644 index 0000000000000000000000000000000000000000..777c9c9cae66bf3d8b81862da87d937a1acbda15 --- /dev/null +++ "b/ICML/2025/e-GAI_ e-value-based Generalized $\316\261$-Investing for Online False Discovery Rate Control/layout.json" @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:6065aa35753b85385d873638bf8da6100aafcac12c576eb9d7b1e2f6394328ae +size 1336155 diff --git a/ICML/2025/iDPA_ Instance Decoupled Prompt Attention for Incremental Medical Object Detection/65055bae-2509-400e-8ce0-3845df4dd036_content_list.json b/ICML/2025/iDPA_ Instance Decoupled Prompt Attention for Incremental Medical Object Detection/65055bae-2509-400e-8ce0-3845df4dd036_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..1b3a0dcab54603490bc42de9869de78bf743143e --- /dev/null +++ b/ICML/2025/iDPA_ Instance Decoupled Prompt Attention for Incremental Medical Object Detection/65055bae-2509-400e-8ce0-3845df4dd036_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e1a4b973b0a2857a5e0359247ee0e1e8d3c47c8af0ae7067a941fa57cfe2c408 +size 138259 diff --git a/ICML/2025/iDPA_ Instance Decoupled Prompt Attention for Incremental Medical Object Detection/65055bae-2509-400e-8ce0-3845df4dd036_model.json b/ICML/2025/iDPA_ Instance Decoupled Prompt Attention for Incremental Medical Object Detection/65055bae-2509-400e-8ce0-3845df4dd036_model.json new file mode 100644 index 0000000000000000000000000000000000000000..e10fbc19876e038592ea9f5175663c5a3de83b4c --- /dev/null +++ b/ICML/2025/iDPA_ Instance Decoupled Prompt Attention for Incremental Medical Object Detection/65055bae-2509-400e-8ce0-3845df4dd036_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:673d81d7cca540e3250bf61e118e91ffd7aadbb95e2ca697c6b6238c2731f3e5 +size 163443 diff --git a/ICML/2025/iDPA_ Instance Decoupled Prompt Attention for Incremental Medical Object Detection/65055bae-2509-400e-8ce0-3845df4dd036_origin.pdf b/ICML/2025/iDPA_ Instance Decoupled Prompt Attention for 
Incremental Medical Object Detection/65055bae-2509-400e-8ce0-3845df4dd036_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..e652f2fa5457a79162091e4b2a8e86ac8b4f44a3 --- /dev/null +++ b/ICML/2025/iDPA_ Instance Decoupled Prompt Attention for Incremental Medical Object Detection/65055bae-2509-400e-8ce0-3845df4dd036_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3e67e970cd5466b4aef11eb6d5f543a2244c310dd345e99a1b6a1eab715cbbd8 +size 3681202 diff --git a/ICML/2025/iDPA_ Instance Decoupled Prompt Attention for Incremental Medical Object Detection/full.md b/ICML/2025/iDPA_ Instance Decoupled Prompt Attention for Incremental Medical Object Detection/full.md new file mode 100644 index 0000000000000000000000000000000000000000..c1cfce8ddc1700de1781731cf5368265e215ddd4 --- /dev/null +++ b/ICML/2025/iDPA_ Instance Decoupled Prompt Attention for Incremental Medical Object Detection/full.md @@ -0,0 +1,499 @@ +# iDPA: Instance Decoupled Prompt Attention for Incremental Medical Object Detection + +Huahui Yi1 Wei Xu2 Ziyuan Qin3 Xi Chen45 Xiaohu Wu6 Kang Li17† Qicheng Lao6† + +# Abstract + +Existing prompt-based approaches have demonstrated impressive performance in continual learning, leveraging pre-trained large-scale models for classification tasks; however, the tight coupling between foreground-background information and the coupled attention between prompts and image-text tokens present significant challenges in incremental medical object detection tasks, due to the conceptual gap between medical and natural domains. To overcome these challenges, we introduce the iDPA framework, which comprises two main components: 1) Instance-level Prompt Generation (IPG), which decouples fine-grained instance-level knowledge from images and generates prompts that focus on dense predictions, and 2) Decoupled Prompt Attention (DPA), which decouples the original prompt attention, enabling a more direct and efficient transfer of prompt information while reducing memory usage and mitigating catastrophic forgetting. We collect 13 clinical, cross-modal, multi-organ, and multi-category datasets, referred to as ODinM-13, and experiments demonstrate that iDPA outperforms existing SOTA methods, with FAP improvements of $5.44\%$ , $4.83\%$ , $12.88\%$ , and $4.59\%$ in full data, 1-shot, 10-shot, and 50-shot settings, respectively. Code is available at https://github.com/HarveyYi/iDPA.git. + +1West China Biomedical Big Data Center, West China Hospital, Sichuan University 2School of Biomedical Engineering, Division of Life Sciences and Medicine, University of Science and Technology of China 3Case Western Reserve University 4Sports Medicine Center, Department of Orthopedics and Orthopedic Research Institute, West China Hospital, West China School of Medicine, Sichuan University 5Department of Orthopedics and Orthopedic Research Institute, West China Hospital, Sichuan University 6Beijing University of Posts and Telecommunications 7Sichuan University Pittsburgh Institute. Correspondence to: Kang Li , Qicheng Lao . + +Proceedings of the $42^{nd}$ International Conference on Machine Learning, Vancouver, Canada. PMLR 267, 2025. Copyright 2025 by the author(s). + +# 1. 
Introduction + +![](images/7b2276b59705de2d2d8c3d7b795443938eb88ceae949e2a8c32e89da653ff614.jpg) +Figure 1: Comparison of our method (iDPA) with prior methods, highlighting improved object localization and recognition through instance-level prompt generation and decoupled prompt attention for medical detection tasks. + +Vision Language Object Detection (VLOD) (Li et al., 2022; Liu et al., 2023a; Cheng et al., 2024), a paradigm that enables recognition of novel categories and scenes, has advanced object detection (OD) by integrating a language branch and leveraging large-scale image-text datasets. While these models demonstrate strong zero-shot capabilities in general domains, they struggle in the medical domain due to the degradation of medical object localization and recognition. However, neither fine-tuning separate OD models for each task nor jointly training a single model for all tasks is practical, as maintaining multiple models is inefficient and predefining all medical concepts is infeasible. Instead, Continual Learning (CL) (Li & Hoiem, 2017; Rebuffi et al., 2017; Wang et al., 2022c) is essential for adapting to emerging medical concepts while retaining prior knowledge. It must balance stability and plasticity to enable continuous learning and improve healthcare outcomes. + +Recently, prompt-based CL approaches (Wang et al., 2022c;b;a; Smith et al., 2023) have gained popularity for encoding knowledge into prompt sets, enabling a frozen pre-trained model to handle sequential tasks. Compared to previous methods, these approaches not only achieve remarkable performance but also offer key advantages for continual learning. By keeping the base model unchanged and tuning only the prompt vectors, training efficiency is enhanced while eliminating the need for exemplar storage, + +reducing both memory overhead and computational costs. + +However, these methods are primarily designed for classification tasks and are not well-suited for OD. Unlike classification, which relies on global information, object detection demands finer-grained instance information. In previous prompt-based CL methods, as shown at the top of Fig. 1, global prompt learning incorporates both foreground and background information, which can interfere with detection tasks. Furthermore, excessive background information can confuse category recognition, especially when task modalities are similar, leading to misclassification. Additionally, prepending prompts directly to the image and text tokens dilutes the prompt information because the length of imagetext tokens far exceeds those of prompts, coupling the two and hindering task-specific learning. + +Given the complex attention interactions between vision and language in VLOD, the prepending approach introduces further interference between vision and text prompts. Lastly, inserting prompts into pre-trained model, i.e., at the backbone level, limits the effectiveness of tuning, as fine-grained reasoning for detection occurs post-backbone, i.e., at the fusion level. + +In response to these challenges, we propose a framework called instance Decoupled Prompt Attention (iDPA) for incremental medical object detection. iDPA integrates Instance-level Prompt Generation (IPG) for generating fine-grained object knowledge and Decoupled Prompt Attention (DPA) to enhance prompt decoupling and precise knowledge injection during the multimodal fusion process, as shown at the bottom of Fig. 1. 
In the IPG module, instance features are decoupled from images, and using cross-attention to query the concept buried in the instance can separate and integrate knowledge across tasks. The DPA module decouples the originally coupled attention between prompts and tokens in previous methods, retaining three key components: vision-language mutual enhancement $(V\leftrightarrow T)$ , prompt-to-vision $(P_{t}\rightarrow V)$ , and prompt-to-text $(P_{v}\rightarrow T)$ knowledge injection. Additionally, instead of injecting knowledge only at the backbone level, we also innovatively apply knowledge injection during the multimodal fusion encoder. + +Consequently, iDPA incorporates three key strategies. First, by decoupling instance features from background information and incorporating fine-grained visual details into the prompt vectors, iDPA enhances object localization and recognition precision compared to randomly initialized prompts. It also effectively mitigates category confusion by focusing on the target entities and reducing spurious correlations with the background. Second, the decoupled prompt attention approach, which separates prompt vectors from token representations, accelerates knowledge injection, making it more effective for localizing and recognizing medical concepts. This also mitigates catastrophic forgetting + +by preserving the original category distribution. Finally, it strategically employs the fusion encoder as the optimal knowledge injection position, which is critical for learning new medical concepts and further enhances efficiency. + +In this paper, our contributions are summarized in threefold: + +- We propose a novel prompt-based framework iDPA to effectively address incremental medical object detection (IMOD). It decouples instance-level knowledge and efficiently injects it into VLOD models through DPA. +- To evaluate the effectiveness of our method, we compile medical data from 13 datasets covering multiple modalities and organs, forming ODinM-13 for the IMOD task. +- Extensive experiments demonstrate the effectiveness of our proposed approach, achieving state-of-the-art performance with only $1.4\%$ of the trained parameters in both full-data and few-shot settings. + +# 2. Related Work + +Vision Language Object Detection. Vision-Language Models (VLMs) enhance generalization by aligning visual and textual features through large-scale image-text learning. CLIP (Radford et al., 2021) and ALIGN (Li et al., 2021) leverage contrastive learning to associate images with text, inspiring GLIP (Li et al., 2022) to unify phrase grounding and object detection. Building on GLIP, MQ-Det (Xu et al., 2024) integrates a multimodal query encoder, while Grounding DINO (Liu et al., 2023a) employs a DETR-like (Carion et al., 2020) head for improved scalability. For MOD tasks, existing methods adapt pre-trained natural-domain VLOD models to the medical domain, such as MIU-VL (Qin et al.) with prompt engineering, Guo et al. (Guo et al., 2023) with prompt fusion. However, they struggle with generalization across tasks and domains. + +Continual Learning. Continual Learning (CL) mitigates catastrophic forgetting when learning new tasks through three primary approaches. Regularization-based methods (Li & Hoiem, 2017; Kirkpatrick et al., 2017; Aljundi et al., 2018; Ding et al., 2022; Lao et al., 2021b) constrain loss functions to retain prior knowledge while adapting to new data. 
Architecture-based methods (Douillard et al., 2022; Li et al., 2019; Yoon et al., 2017; Mallya & Lazebnik, 2018) assign dedicated parameters for each task to prevent interference. Rehearsal-based methods (Rolnick et al., 2019; Lopez-Paz & Ranzato, 2017; Shin et al., 2017; Rebuffi et al., 2017; Lao et al., 2021a) replay stored exemplars or generate pseudo-samples to mitigate forgetting. With the rise of large-scale pre-trained models, prompt-based continual learning, an architecture-based approach, has gained prominence. L2P (Wang et al., 2022c) first introduced a prompt pool for continual learning, with DualPrompt (Wang et al., 2022b) further partitioning knowledge into general + +and expert components. S-Prompt (Wang et al., 2022a) enables domain-adaptive prompt learning, while CODA-Prompt (Smith et al., 2023) improves prompt selection via attention mechanisms. DIKI (Tang et al., 2025) reduces task interference with residual tuning, and NoRGa (Le et al., 2024) models prompts as a Mixture-of-Experts with adaptive weighting. Eclipse (Kim et al., 2024) enables efficient continual panoptic segmentation via visual prompt tuning, avoiding retraining and reducing forgetting. Recently, continual learning in medical has gained increasing attention for its flexibility and adaptability to downstream tasks (Yi et al., 2023; Ye et al., 2024), offering a more practical fit for clinical use than all-in-one models like Medical SAM (Zhu et al., 2024; Ma et al., 2024). + +Continual Object Detection. Continual Object Detection (COD) extends object detection to new categories while retaining prior knowledge. ILOD (Shmelkov et al., 2017) first introduced COD, followed by CL-DETR (Liu et al., 2023b), which improved incremental detection with distillation and memory. ZiRa (Deng et al., 2024) was the first to adapt pre-trained VLOD models for COD, mitigating forgetting through regularization and reparameterization. However, most COD research focuses on natural domains, leaving its effectiveness in data-scarce medical applications uncertain, making this an open challenge. + +# 3. Preliminary + +# 3.1. Task Definition + +Incremental Medical Object Detection (IMOD) involves incrementally detecting and localizing medical objects (e.g., lesions, tumors, organs) in medical imaging data (e.g., CT, MRI, X-ray, PET scans) over time. The task requires sequential learning from multiple tasks $\left[\mathcal{T}_1,\mathcal{T}_2,\dots ,\mathcal{T}_N\right]$ , where each task $\mathcal{T}_i$ consists of a dataset $\mathcal{D}_i = \{x_i^j,y_i^j\}_{j = 1}^{N_i}$ , with $x_{i}^{j}$ representing images and $y_{i}^{j}$ including bounding boxes and class labels. Each task also includes a class name set $C_i = \{c_i^j\}_{j = 1}^{N_{C_i}}$ , linking label indices to category names used by the text encoder of VLOD models. The main challenge in IMOD, particularly in class-incremental learning, is to adapt to new object classes introduced in each task without forgetting previously learned ones, allowing the model to handle an expanding range of medical objects while maintaining detection accuracy across all learned classes. This work is developed based on pre-trained VLOD models (such as GLIP (Li et al., 2022)). When training task $t$ , the task's classes encompass the current task's classes along with the previous tasks' classes. + +# 3.2. 
Vision Language Object Detection + +To better achieve IMOD, this work builds upon pre-trained VLOD models in natural domains, providing a strong foun + +dation for improving generalization and robustness in data-scarce scenarios, making them highly suitable for practical medical settings. Unlike traditional object detectors, VLOD models replace the classification head with a textual encoder, such as BERT (Devlin, 2018), and introduce a cross-modality fusion encoder that enhances the model's ability to detect medical objects across different imaging modalities. VLOD models for object detection consist of four key components: 1) Visual Encoder $\Phi_v$ , 2) Textual Encoder $\Phi_t$ , 3) Cross-Modality Fusion Encoder $\Phi_f$ , and 4) Localization Head $\Phi_{\mathrm{loc}}$ . + +$$ +f _ {v} = \Phi_ {\mathbf {v}} (\operatorname {I m g}), \quad f _ {t} = \Phi_ {\mathbf {t}} (\text {T e x t}), \tag {1} +$$ + +$$ +f _ {v} ^ {\prime}, f _ {t} ^ {\prime} = \Phi_ {\mathrm {f}} \left(f _ {v}, f _ {t}\right), \tag {2} +$$ + +$$ +p _ {\mathrm {l o c}} = \Phi_ {\mathrm {l o c}} \left(f _ {v} ^ {\prime}\right), \quad p _ {\mathrm {c l s}} = f _ {v} ^ {\prime} \cdot \left(f _ {t} ^ {\prime}\right) ^ {\mathrm {T}}. \tag {3} +$$ + +The workflow is as follows: First, visual and textual features are extracted using $\Phi_v$ and $\Phi_t$ (Eq. (1)). These features are then fused through the cross-modality fusion encoder $\Phi_f$ (Eq. (2)). Finally, localization $p_{\mathrm{loc}}$ and classification $p_{\mathrm{cls}}$ predictions are generated (Eq. (3)), where $\cdot$ denotes matrix multiplication. The fusion of these features enhances object localization and recognition accuracy. + +# 3.3. Prompt-Based Continual Learning + +Prompt-based CL, a type of CL method based on prompting pre-trained models, incrementally learns and stores a lightweight, learnable parameter (known as a prompt) for each task, gradually building a "prompt pool" $P = \{p_1, p_2, \dots, p_N\}$ , where $p_i \in \mathbb{R}^{l \times d}$ . Here, $N$ represents the number of tasks, $l$ is the prompt length, and $d$ is the feature embedding dimension. At inference time, a selected prompt from the prompt pool is appended to the frozen pre-trained model to restore learned knowledge. Given the feature embeddings $f_e \in \mathbb{R}^{L \times d}$ for a transformer layer, the input is formed by concatenating the selected prompt $p_s \in \mathbb{R}^{l \times d}$ with $f_e$ as follows: + +$$ +\operatorname {T r a n s f o r m e r} \left(\left[ p _ {s}; f _ {e} \right]\right), \quad \text {w h e r e} \left[ p _ {s}; f _ {e} \right] \in \mathbb {R} ^ {(l + L) \times d}, \tag {4} +$$ + +where $p_s$ is the selected prompt embedding. The prompt selection process relies on query-key matching, with feature centroids $\mathbb{K} = \{K_i\}_{i=1}^N$ learned during training via cosine similarity or clustering. For a test sample $x$ , the most relevant centroid $K_s$ is identified by: + +$$ +K _ {s} = \arg \max _ {K _ {i} \sim \mathbb {K}} \left\langle \Phi_ {\mathbf {v}} (x), K _ {i} \right\rangle . \tag {5} +$$ + +Currently, prompt-based CL methods focus on global, mixed knowledge, which suffices for classification tasks. However, for IMOD, fine-grained knowledge is crucial for precise localization and understanding of medical objects. Therefore, adapting prompt-based CL for IMOD requires focusing on learning and preserving specific, fine-grained knowledge to address the unique challenges of IMOD. 
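To make the query–key mechanism of Eqs. (4)–(5) concrete, the following is a minimal PyTorch-style sketch of a prompt pool with selection by feature–key similarity. The class and variable names (`PromptPool`, `select`, the 256-dimensional features) are illustrative assumptions, not code from any released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PromptPool(nn.Module):
    """Minimal sketch of a prompt pool for prompt-based CL (Eqs. (4)-(5))."""

    def __init__(self, num_tasks: int, prompt_len: int, dim: int):
        super().__init__()
        # One learnable prompt p_i in R^{l x d} and one key K_i in R^d per task.
        self.prompts = nn.Parameter(torch.zeros(num_tasks, prompt_len, dim))
        self.keys = nn.Parameter(torch.randn(num_tasks, dim))

    def select(self, query: torch.Tensor) -> torch.Tensor:
        """Pick the prompt whose key best matches the query feature (Eq. (5)).

        query: pooled feature of the test image from the frozen visual encoder, shape (d,).
        """
        sims = F.cosine_similarity(query.unsqueeze(0), self.keys, dim=-1)  # (N,)
        return self.prompts[sims.argmax()]                                 # (l, d)

    def prepend(self, prompt: torch.Tensor, tokens: torch.Tensor) -> torch.Tensor:
        """Concatenate the selected prompt with token embeddings (Eq. (4))."""
        # tokens: (L, d) -> output: (l + L, d), fed to the frozen transformer layer.
        return torch.cat([prompt, tokens], dim=0)

# Usage sketch: query with a pooled visual feature, then prepend to layer inputs.
pool = PromptPool(num_tasks=13, prompt_len=10, dim=256)
feat = torch.randn(256)          # stand-in for a pooled Phi_v(x)
tokens = torch.randn(900, 256)   # stand-in for token embeddings f_e
layer_input = pool.prepend(pool.select(feat), tokens)   # shape (910, 256)
```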
+ +![](images/f6cbba02b62858ab18a83ed4e6c574024b5cb524698126080511adfbc36309be.jpg) + +![](images/2f1b38fb5a8c2840bd02d7a51efd4dcfaf6127b0b442a175b83066ec58842e29.jpg) + +![](images/81ccd66d5522fedb1f7763e6c264acbdd6511822f88f5d230c73db4179f69cc1.jpg) + +![](images/ca077dc581a408111ab12d884587578a194abb0c5ac0c5c62ca945159f699d20.jpg) +Figure 2: Overview of iDPA. Based on a frozen pre-trained VLOD model with visual-language interaction modules (e.g., GLIP (Li et al., 2022)), iDPA integrates Instance-level Prompt Generation (IPG) and Decoupled Prompt Attention (DPA) to enhance object localization and recognition, optimizing knowledge transfer for medical detection tasks. + +# 4. Methodology + +# 4.1. Overview of iDPA + +To effectively achieve the IMOD goal, our core idea is to decouple fine-grained instance-level knowledge, generate enriched concept prompts, and optimize the Prompt Attention (PA) mechanism by retaining key attention components, enabling efficient knowledge injection into the pre-trained model while mitigating class forgetting issues. This reliable, robust approach helps the model focus on essential localization and recognition for clinical medical object detection. Thus, we propose iDPA, an efficient, scalable IMOD framework, as shown in Fig. 2. + +To realize this design, iDPA integrates Instance-level Prompt Generation (IPG, Sec. 4.2) and Decoupled Prompt Attention (DPA, Sec. 4.3) to enhance robust, scalable incremental learning. First, IPG extracts fine-grained, adaptive instance-level features from the training set and generates rich, diverse, stable instance-specific prompt knowledge. This contextual knowledge is then injected into the frozen, pretrained model through DPA, enabling the model to retain focus on reliable, critical, task-specific details while effectively mitigating interference from previous tasks. Through this streamlined process, iDPA facilitates efficient, precise and seamless fine-grained knowledge transfer, which is essential for accurate IMOD performance. + +# 4.2. Instance-Level Prompt Generation + +Inspired by prior research (Wu et al., 2023; Xu et al., 2024), we first use a pre-trained model to extract image features from the training set, then focus on target regions via bounding boxes. To refine these features, we apply cross-attention to disentangle and clarify different concepts, as demonstrated in (Alayrac et al., 2022; Xu et al., 2024). + +Decoupling Instance Features for Prompt Construction. For each task $\mathcal{T}_i$ , prior to model training, we decouple the instance-level representations $\mathcal{L}_i$ for each of the $|C_i|$ categories from the training data as follows: + +$$ +v _ {c} ^ {(j)} = \operatorname {R o I P o o l} (\Phi (\operatorname {I m g}, \text {T e x t}), \gamma b), \quad j = 1, 2, \dots , M, \tag {6} +$$ + +$$ +\mathcal {I} _ {i} = \left\{\mathbf {v} _ {c} \mid \mathbf {v} _ {c} = \left\{v _ {c} ^ {(j)} \right\} _ {j = 1} ^ {M} \right\} _ {c = 1} ^ {| C _ {s} |}, \tag {7} +$$ + +where each $\mathbf{v}_c\in \mathbb{R}^M$ consists of $M$ instance-level representations of each category, which are encoded by the image encoder $\Phi_{\mathbf{v}}$ or the fusion encoder $\Phi_{\mathbf{f}}$ . These representations are extracted before the attention layers, which correspond to the attention layers where prompt learning is applied. 
Specifically, given an instance from the $c$ -th category with bbox $b\in \mathbb{R}^4$ in an image, an RoI pooler (Ren, 2015) is used to extract the corresponding region feature $v_{c}^{(j)}\in \mathbb{R}^{d}$ . The scaling factor $\gamma = 1.3^2$ increases the region size to capture additional contextual information. During training, for each class, we query the $M$ instance representations to decouple the $l$ concepts (note that $l$ is the prompt length + +defined in Eq. (4)) contained within them, which are then used to form the prompt for the current task. In practice, we set $M = 1000$ for full-data settings to ensure diverse concept coverage. For few-shot learning, we set $M = m$ , where $m$ is the number of available shots per class. + +Continual Concept Perception and Knowledge Integration (CCPKI). Given the instance-level representations $\mathcal{I}_i = \{\mathbf{v}_c\mid |\mathbf{v}_c| = K\}_{c = 1}^{|C_i|}$ , extracted from the training data of the current task for each category, the CCPKI module decouples the concepts from these instances using a Query-Answer framework. Specifically, the generation of the $i$ -th prompt $\hat{p}_i$ can be expressed as follows: + +$$ +\dot {p _ {i}} = \operatorname {s o f t m a x} \left(\frac {p _ {i} \left(\mathcal {W} _ {k} \mathcal {I} _ {i}\right)}{\sqrt {d}}\right) \left(\mathcal {W} _ {v} \mathcal {I} _ {i}\right); \ddot {p _ {i}} = p _ {i} + \alpha \cdot \sigma (\tau \cdot \dot {p _ {i}}). \tag {8} +$$ + +To generate task-specific prompts from instance features, we adopt the following attention-based formulation. Here, $p_i \in \mathbb{R}^{l \times d}$ represents the initial prompt for the $i$ -th task, consisting of $l$ learnable concept components that serve as queries. The instance-level representations $\mathcal{I}_i$ are projected into key and value vectors using learnable matrices $\mathcal{W}_k$ and $\mathcal{W}_v$ , respectively. A cross-attention mechanism is then applied to extract task-relevant conceptual knowledge from the instances. Since different tasks may involve different concepts, a learnable scaling factor $\tau \in \mathbb{R}^{l \times 1}$ is used to dynamically modulate the concept weights, followed by a nonlinear activation function $\sigma(\cdot)$ (e.g., tanh) to filter and enhance the meaningful components. Finally, the activated concepts are scaled by $\alpha \in \mathbb{R}^{1 \times d}$ and added to the initial prompt via a residual connection. + +To transfer knowledge effectively, it is common for previous tasks to assist in learning subsequent ones. Inspired by this, the CCPKI for the $i$ -th task is initialized with the parameters from the previous CCPKI, i.e., $\Phi_i^{CCPKI} \gets \Phi_{i-1}^{CCPKI}$ . In this way, our approach retains the generated concept knowledge $\hat{p}_i$ from the decoupled instances in the prompt pool after training, resulting in a dynamically evolving prompt pool. This design enhances scalability and flexibility, making it particularly well-suited for IMOD tasks. + +# 4.3. Decoupled Prompt Attention + +Instead of training a series of prepended prompt vectors for each task, we focus on modifying the attention mechanism to learn multimodal knowledge efficiently. Specifically, we decouple the PA mechanism into two components: the original attention and the attention with prompt knowledge injection. These components are then integrated through a residual connection. This decoupling process is referred to as the DPA mechanism. 
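As a concrete illustration of the two components just described, the sketch below implements the CCPKI update of Eq. (8), which queries instance-level features with learnable concept prompts, together with a residual form of prompt injection in the spirit of the decoupling above (formalized later in Eqs. (12)–(13)). It is a minimal sketch with assumed module names and dimensions, not the released implementation.

```python
import math
import torch
import torch.nn as nn

class CCPKI(nn.Module):
    """Sketch of Eq. (8): query M instance features with l learnable concept queries."""

    def __init__(self, prompt_len: int, dim: int):
        super().__init__()
        self.p = nn.Parameter(torch.randn(prompt_len, dim) * 0.02)  # initial prompt p_i
        self.Wk = nn.Linear(dim, dim, bias=False)                   # W_k
        self.Wv = nn.Linear(dim, dim, bias=False)                   # W_v
        self.tau = nn.Parameter(torch.ones(prompt_len, 1))          # per-concept scale tau
        self.alpha = nn.Parameter(torch.zeros(1, dim))              # residual scale alpha

    def forward(self, inst: torch.Tensor) -> torch.Tensor:
        # inst: (M, d) instance-level representations for the current task.
        d = inst.size(-1)
        scores = self.p @ self.Wk(inst).t() / math.sqrt(d)          # (l, M) cross-attention scores
        p_dot = scores.softmax(dim=-1) @ self.Wv(inst)              # (l, d) attended concepts
        return self.p + self.alpha * torch.tanh(self.tau * p_dot)   # residual update of Eq. (8)

def dpa_inject(attn_v2t, f_v, f_t, p_v, lam):
    """Residual prompt injection in the spirit of DPA (cf. Eqs. (12)-(13)):
    the frozen cross-attention branch is kept intact and a prompt branch,
    scaled by a learnable lam (initialized to zero), is added on top."""
    return attn_v2t(f_v, f_t) + lam * attn_v2t(p_v, f_t)
```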
Compared to PA, DPA accelerates instance-level knowledge learning, reduces task interference, and lowers computational complexity, resulting in reduced memory usage during training. + +The following outlines the derivation from PA to DPA, using the visual-language interaction module $X$ -Attn (Li et al., 2022; Liu et al., 2025) as an example. The input multimodal tokens $f_{v} \in \mathbb{R}^{L_{v} \times d}$ (for vision) and $f_{t} \in \mathbb{R}^{L_{t} \times d}$ (for text) are fed into $X$ -Attn, where the mutual enhancement of multimodal knowledge produces updated $\widetilde{f}_{v}$ and $\widetilde{f}_{t}$ : + +$$ +\begin{array}{l} \left\{\widetilde {f} _ {t}, \widetilde {f} _ {v} \right\} = X - \operatorname {A t t n} \left(f _ {t}, f _ {v}\right) \\ = \left\{f _ {t} + \operatorname {A t t n} _ {v \rightarrow t} \left(f _ {v}, f _ {t}\right), f _ {v} + \operatorname {A t t n} _ {t \rightarrow v} \left(f _ {t}, f _ {v}\right)\right\} \\ = \left\{f _ {t} + \bar {f} _ {t}, f _ {v} + \bar {f} _ {v} \right\}. \tag {9} \\ \end{array} +$$ + +In Eq. (9), $\mathrm{Attn}_{v\to t}(f_v,f_t)$ represents the vision-to-text knowledge transfer, denoted as $\bar{f}_t$ , and vice versa for the text-to-vision transfer $\bar{f}_v$ . Inspired by (He et al., 2021), + +![](images/b63309358f4db90dd8280cc226973b1314a761e22f56e4df5445a4305bf242c9.jpg) + +![](images/89b2b93e85b78d1475041c502766cb6fed222c68e35f1384c8cb952efa8b0e33.jpg) +Figure 3: Comparison between Decoupled Prompt Attention (DPA) and Original Prompt Attention. + +we analyze the role of prompt tuning in multimodal fusion through formal derivation. We derive an equivalent formulation to closely examine the prompt's role in $\mathrm{Attn}_{v\rightarrow t}$ (For simplicity, we focus on $\mathrm{Attn}_{v\rightarrow t}$ , which can similarly be extended to $\mathrm{Attn}_{t\rightarrow v}$ ), providing an alternative perspective of multimodal prompt tuning: + +$$ +\begin{array}{l} [ \bar {p} _ {t}; \bar {f} _ {t} ] = \operatorname {A t t n} _ {v \rightarrow t} ([ p _ {v}; f _ {v} ], [ p _ {t}; f _ {t} ]) \\ = \left[\left(1 - \lambda \left(p _ {t}\right)\right) \operatorname {A t t n} _ {v \rightarrow t} \left(f _ {v}, p _ {t}\right) + \lambda \left(p _ {t}\right) \operatorname {A t t n} _ {v \rightarrow t} \left(p _ {v}, p _ {t}\right); \right. \\ (1 - \lambda (f _ {t})) \operatorname {A t t n} _ {v \rightarrow t} \left(f _ {v}, f _ {t}\right) + \lambda (f _ {t}) \operatorname {A t t n} _ {v \rightarrow t} \left(p _ {v}, f _ {t}\right) ], \tag {10} \\ \end{array} +$$ + +where $p_{\{v,t\}} \in \mathbb{R}^{l \times d}$ represent vision and text prompts, respectively. These prompts are prepended to the visual and textual features $f_v$ and $f_t$ before being fed into $\mathrm{Attn}_{v \to t}$ . + +By examining Eq. (10), we introduce the DPA mechanism, which decouples the knowledge transfer processes $p_v \rightarrow f_t$ and $f_v \rightarrow f_t$ in PA. This is achieved through: 1) discarding the learning of $[\bar{p}_t]$ similar to VPT-Deep (Jia et al., 2022), which can be replaced by new trainable prompts in subsequent $X$ -Attn layers to reduce computational complexity + +Table 1: Performance of various continual learning methods on the ODinM-13 benchmark under the full data setting. FAP (%): Final Average AP. CAP (%): Cumulative Average AP. FFP (%): Final Forgetting Percentage of old tasks. + +
| Methods | DFUC | Kvasir | OpticN | BCCD | CPM-17 | BreastC | TBX11K | KidneyT | Luna16 | ADNI | Meneng | BreastT | TN3K | FAP↑ | CAP↑ | FFP↓ |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Zero-shot | 4.60 | 12.30 | 3.20 | 3.70 | 0.10 | 1.00 | 0.00 | 1.30 | 0.00 | 0.00 | 4.40 | 4.50 | 5.50 | 3.12 | - | - |
| SAM2.1 | 0.06 | 1.71 | 0.00 | 8.83 | 18.00 | 0.15 | 0.00 | 0.67 | 0.23 | 0.00 | 12.54 | 0.08 | 0.22 | 3.27 | - | - |
| MedSAM-2* | 4.15 | 5.34 | 4.53 | 13.83 | 22.44 | 1.54 | 0.00 | 1.56 | 4.32 | 0.04 | 18.56 | 2.96 | 3.45 | 5.02 | - | - |
| FT (Oracle) | 51.74 | 76.70 | 79.78 | 62.63 | 37.45 | 54.08 | 36.76 | 66.73 | 38.52 | 49.22 | 75.35 | 45.39 | 61.72 | 56.62 | - | - |
| Joint (Upper) | 46.82 | 72.19 | 78.70 | 61.74 | 37.18 | 53.34 | 35.35 | 65.72 | 34.03 | 47.97 | 73.90 | 43.89 | 59.90 | 54.67 | - | - |
| *Non-Prompt-based CL* |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |
| Sequential | 0.00 | 0.00 | 0.00 | 0.00 | 3.18 | 0.00 | 0.00 | 0.00 | 0.00 | 32.84 | 0.00 | 0.00 | 21.18 | 4.40 | 15.87 | 57.81 |
| WiSE-FT | 10.47 | 45.93 | 1.21 | 16.23 | 7.52 | 6.59 | 0.20 | 10.25 | 0.13 | 0.03 | 13.11 | 14.65 | 13.06 | 10.72 | 26.18 | 18.60 |
| ER | 34.86 | 60.04 | 54.07 | 55.72 | 15.49 | 40.90 | 17.33 | 48.23 | 13.45 | 43.93 | 65.49 | 22.15 | 47.14 | 39.91 | 48.73 | 19.25 |
| ZiRa | 0.14 | 0.13 | 0.00 | 0.71 | 0.10 | 0.42 | 0.00 | 0.00 | 0.00 | 26.85 | 0.00 | 0.15 | 19.02 | 3.66 | 16.37 | 49.67 |
| *Prompt-based CL* |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |
| L2P | 42.11 | 70.16 | 46.58 | 57.67 | 1.62 | 48.09 | 26.96 | 54.60 | 27.36 | 38.05 | 25.58 | 29.44 | 50.25 | 39.88 | 46.04 | 8.24 |
| DualPrompt | 44.26 | 40.47 | 17.73 | 34.03 | 0.59 | 37.03 | 5.91 | 31.29 | 23.12 | 42.30 | 15.58 | 33.57 | 49.63 | 28.89 | 42.24 | 20.57 |
| S-Prompt | 43.48 | 63.02 | 47.49 | 35.52 | 8.37 | 39.51 | 27.96 | 57.79 | 25.82 | 40.67 | 65.02 | 29.61 | 48.97 | 41.02 | 46.70 | 8.87 |
| CODA | 42.01 | 70.63 | 58.78 | 41.96 | 4.51 | 47.62 | 26.50 | 58.36 | 22.48 | 32.99 | 65.71 | 27.04 | 48.39 | 42.08 | 49.78 | 2.80 |
| DIKI | 46.91 | 75.91 | 54.14 | 55.12 | 0.74 | 47.57 | 32.63 | 62.03 | 29.21 | 43.29 | 16.02 | 34.35 | 54.77 | 42.51 | 49.12 | 7.74 |
| NoRGa | 44.62 | 75.87 | 58.44 | 57.73 | 1.07 | 51.76 | 29.71 | 63.48 | 29.53 | 44.54 | 37.56 | 34.11 | 54.46 | 44.84 | 49.90 | 4.92 |
| iDPA (Ours) | 47.09 | 73.76 | 66.85 | 60.29 | 36.54 | 50.98 | 32.69 | 64.98 | 31.15 | 44.42 | 57.20 | 34.65 | 53.03 | 50.28 | 54.10 | 2.48 |
+ +Notes: SAM2.1 denotes auto-segmentation using the SAM2.1-L model, while MedSAM-2* incorporates MedSAM-2's memory bank based on SAM2.1. + +and memory usage; and 2) re-normalizing the attention weights through the weight adjustment scalar $\lambda(f_t)$ , which aims to remove the coupling between the visual features $f_v$ and the visual prompt $p_v$ : + +$$ +\lambda \left(f _ {t}\right) = \frac {\sum_ {i} \exp \left(\frac {f _ {t} \mathcal {W} _ {q} \left(p _ {v} \mathcal {W} _ {k}\right) ^ {\mathrm {T}}}{d}\right) _ {i}}{\sum_ {i} \exp \left(\frac {f _ {t} \mathcal {W} _ {q} \left(p _ {v} \mathcal {W} _ {k}\right) ^ {\mathrm {T}}}{d}\right) _ {i} + \sum_ {j} \exp \left(\frac {f _ {t} \mathcal {W} _ {q} \left(f _ {v} \mathcal {W} _ {k}\right) ^ {\mathrm {T}}}{d}\right) _ {j}} \tag {11} +$$ + +where $\mathcal{W}_{\{q,v\}}\in \mathbb{R}^{d\times d}$ are projection layers. Our proposed DPA mechanism is shown in Fig. 3, and the process of knowledge transfer and fusion for the enhanced text features $\bar{f}_t$ (note that $\bar{p}_t$ is omitted) is then updated to as follows: + +$$ +\begin{array}{l} \bar {f} _ {t} = \operatorname {A t t n} _ {v \rightarrow t} \left(f _ {v}, f _ {t}\right) + \frac {\lambda \left(f _ {t}\right)}{1 - \lambda \left(f _ {t}\right)} \operatorname {A t t n} _ {v \rightarrow t} \left(p _ {v}, f _ {t}\right), (12) \\ = \operatorname {A t t n} _ {v \rightarrow t} \left(f _ {v}, f _ {t}\right) + \lambda \operatorname {A t t n} _ {v \rightarrow t} \left(p _ {v}, f _ {t}\right), (13) \\ \end{array} +$$ + +where $\lambda \in \mathbb{R}^{1\times d}$ is a learnable parameter, initialized to 0, ensuring that the additional information in $p_v\rightarrow f_t$ does not affect the original branch $f_{v}\to f_{t}$ before training on downstream datasets. For $\lambda (f_t)$ , we argue that in object detection tasks, $f_{v}$ is typically much larger than $p_v$ (e.g., $f_{v}$ may exceed 10,000 tokens, while $p_v$ is typically set to 10), causing the prompt information overshadowed by the input features, reducing learning efficiency. This imbalance in the original PA hinders the model's effective use of the prompts, impeding knowledge transfer and task performance. Compared to PA, DPA reduces the interference between prompt information and the pretrained model by lowering their coupling. Specifically, the normalized attention weight $1 - \lambda (f_t)$ before $\mathrm{Attn}_{v\to t}(f_v,f_t)$ is removed, preserving the full pre-trained model information. + +The weight $\lambda(f_t)$ before $\mathrm{Attn}_{v \to t}(p_v, f_t)$ is replaced with a learnable scaling factor, providing greater flexibility in learning prompt knowledge and enabling the model to capture more precise and richer downstream information. Additionally, the decoupling process also reduces the computational complexity and memory usage of attention, as shown in 5.4. + +# 5. Experiments + +# 5.1. Experimental Setup + +Benchmark. To ensure a comprehensive evaluation, we collected 13 MOD tasks (Jha et al., 2020; Boccardi et al., 2015; Cassidy et al., 2021; Liu et al., 2020; Gong et al., 2021; Setio et al., 2017; Vu et al., 2019) from publicly available datasets for IMOD, named ODinM-13. This benchmark evaluates model performance in real medical scenarios, covering 8 imaging modalities across 9 organs. Each task is assessed in both full and few-shot settings ( $k = 1$ , 10, 50), ensuring each class has at least $k$ objects. 
To further evaluate generalizability, we supplement ODinM-13 with a domain-incremental benchmark constructed from four polyp datasets (Ji et al., 2022; Jha et al., 2020; Bernal et al., 2015; Ngoc Lan et al., 2021) across different medical centers. For more details, please see the appendix. + +Evaluation Metric. To evaluate the model's continual learning and forgetting mitigation, we use Final Average AP (FAP) for final performance, Cumulative Average AP (CAP) for overall performance, and Final Forgetting Percentage (FFP) to measure resistance to forgetting old tasks. The prediction performance for task $\hat{i}$ after learning task $i$ is denoted as $\mathrm{AP}_{i,\hat{i}}$ , where Average Precision (AP) is the standard + +Table 2: Overall performance of various continual learning methods on the ODinM-13 benchmark under few-shot data settings. + +
| Method | 1-shot FAP↑ | 1-shot CAP↑ | 1-shot FFP↓ | 10-shot FAP↑ | 10-shot CAP↑ | 10-shot FFP↓ | 50-shot FAP↑ | 50-shot CAP↑ | 50-shot FFP↓ |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Joint (Upper) | 20.03 | - | - | 36.68 | - | - | 46.07 | - | - |
| Sequential | 1.24 | 11.49 | 23.43 | 1.66 | 13.67 | 36.51 | 3.38 | 14.95 | 46.86 |
| WiSE-FT | 6.16 | 14.62 | 9.25 | 9.47 | 21.03 | 12.98 | 10.24 | 23.34 | 16.03 |
| ZiRa | 6.98 | 13.59 | 11.68 | 10.90 | 16.19 | 15.12 | 3.49 | 15.78 | 37.28 |
| L2P | 3.25 | 7.18 | 3.14 | 5.16 | 9.63 | 4.48 | 27.91 | 35.20 | 5.66 |
| DualPrompt | 7.36 | 13.30 | 7.93 | 5.95 | 13.99 | 11.34 | 20.44 | 34.94 | 16.91 |
| S-Prompt | 3.34 | 8.90 | 5.66 | 8.39 | 12.98 | 4.36 | 22.76 | 30.86 | 7.92 |
| CODA | 6.89 | 13.96 | 4.57 | 4.41 | 14.20 | 12.60 | 31.75 | 40.86 | 4.37 |
| DIKI | 5.67 | 13.09 | 3.88 | 6.46 | 15.34 | 9.78 | 34.06 | 42.27 | 5.85 |
| NoRGa | 6.02 | 11.09 | 5.17 | 5.36 | 11.90 | 8.14 | 32.09 | 39.12 | 5.70 |
| iDPA (Ours) | 12.19 | 18.03 | 3.58 | 23.78 | 29.68 | 4.75 | 38.65 | 45.03 | 3.93 |
+ +Table 3: Comparison on four polyp datasets under the continual domain setting. + +
| Methods | Sun | Kvasir | BKAI | ClinicDB | FAP↑ | CAP↑ | FFP↓ |
| --- | --- | --- | --- | --- | --- | --- | --- |
| L2P | 59.22 | 70.01 | 73.16 | 69.24 | 67.91 | 68.69 | 0.35 |
| DualPrompt | 62.64 | 71.76 | 75.43 | 72.63 | 70.62 | 69.77 | 1.53 |
| iDPA (ours) | 66.10 | 74.33 | 78.77 | 77.93 | 74.28 | 70.92 | -0.03 |
+ +COCO metric (Lin et al., 2014). The average AP across all tasks after learning task $i$ is given by $\mathrm{AP}_i = \frac{1}{i}\sum_{\hat{i} = 1}^i\mathrm{AP}_{i,\hat{i}}$ . After completing all $N$ tasks, the final AP for each task is denoted as $\mathrm{AP}_{N,\cdot}$ , and the final average performance is computed as $\mathrm{FAP} = \mathrm{AP}_N$ , which serves as the primary metric for CL performance. To assess historical performance, we calculate $\mathrm{CAP} = \frac{1}{N}\sum_{i = 1}^{N}\mathrm{AP}_i$ , and the Final Forgetting Percentage as $\mathrm{FFP} = \frac{1}{N - 1}\sum_{i = 1}^{N - 1}(\mathrm{AP}_{i,i} - \mathrm{AP}_{N,i})$ , extending the Forgetting Percentage Point (FPP) introduced by CL-DETR (Liu et al., 2023b). + +Comparison Methods. We compare our iDPA with both non-prompt-based and prompt-based CL methods. For non-prompt-based CL, we select Sequential, ER (Rollnick et al., 2019), WiSE-FT (Wortsman et al., 2022), and ZiRa (Deng et al., 2024). For prompt-based CL, we include L2P (Wang et al., 2022c), DualPrompt (Wang et al., 2022b), CODA (Smith et al., 2023), S-Prompt (Wang et al., 2022a), DIKI (Tang et al., 2025), and NoRGa (Le et al., 2024), which follow a similar task-specific parameter training approach to our iDPA. Note that L2P, DualPrompt, CODA, and NoRGa are originally designed for vision tasks; we extend them to multimodal tasks for fair comparison. For the all-in-one style foundation model, we compared Zero-shot GLIP, SAM2.1 (Ravi et al., 2024), and MedSAM-2 (Zhu et al., 2024). More details on reproduction can be found in the appendix. + +Implementation Details. We use the GLIP (Liu et al., 2023a) model with Swin-T (Liu et al., 2021), pre-trained + +![](images/1ae1377575c33cc2caf13e3983132c7758977eafbc8482405160a5ef9fd02370.jpg) + +![](images/6e859cb297997b7e85445e754add81a161a58ccba76abe026093786f9a0549d9.jpg) + +![](images/c403a541f26a86a318d696b42eab09230ca2b09e40ce05612751697b8ed65721.jpg) +Figure 4: Performance variation of different CL methods in full-data and few-shot settings. + +![](images/4688f9595712a02d67da6fd0c3b0f95c8ae4659c91457588f4900d852cad9cbd.jpg) + +on Object365 (Shao et al., 2019), GoldG (Liu et al., 2023a), and Cap4M (Liu et al., 2023a), as a robust starting point. All experiments employ AdamW (Loshchilov, 2017) with a multistep learning rate scheduler. The learning rate is set to 0.1, and weight decay is set to 1e-4. The experiments run on 4 GPUs with a batch size of 1 per GPU for 5 epochs, with a learning rate decay of 0.1 at epoch 3. All results are averaged over 3 random seeds, with the task order determined by the seed. All comparison methods are re-implemented based on their official implementations. For more details, please refer to the appendix. + +# 5.2. Main Results + +Full Data Setting. Tab. 1 presents the final performance for each task on ODinM-13 under full data training, along with the FAP, CAP, and FFP scores to evaluate the model's final performance, overall performance, and resistance to forgetting. The "Zero-shot" results represent the starting point, derived by leveraging the original GLIP weights for each task. The "FT" results indicate the model's oracle performance, which is achieved by training on a single task and testing on the corresponding task. The "Joint" results represent the model trained on the datasets of all tasks, serving as the upper bound in continual learning. + +As indicated by the bold values, iDPA achieves the best final performance on 9 out of 13 tasks compared to other methods. 
It outperforms the previous prompt-based SOTA method, NoRGa (Le et al., 2024), by $5.44\%$ in FAP, $4.20\%$ in CAP, and reduces FFP by $2.44\%$ . It also outperforms the previous non-prompt-based SOTA method, ER (Rolnick et al., 2019), which requires extra data (10-shot per class) for rehearsal and full model tuning, by $10.37\%$ in FAP, $5.37\%$ in CAP, and reduces FFP by $16.77\%$ . Furthermore, iDPA uses only $1.4\%$ of the trainable parameters and does not require additional data for rehearsal. Compared to ZiRa (Deng et al., + +Table 4: Ablation study of key components in iDPA. + +
| Method | FAP↑ | CAP↑ | FFP↓ | #Params↓ | #Memory↓ |
| --- | --- | --- | --- | --- | --- |
| Naïve | 44.99 | 49.86 | 4.31 | 1.24M | 7200M |
| IPG (w/ T) | 48.65 | 52.69 | 3.71 | 3.21M | 7285M |
| DPA | 47.10 | 51.04 | 3.73 | 1.37M | 6496M |
| IPG (w/o T) + DPA | 50.17 | 53.89 | 2.54 | 3.22M | 6590M |
| IPG (w/ T) + DPA | 50.28 | 54.10 | 2.48 | 3.22M | 6590M |
+ 

Notes: FAP (%): Final Average AP. CAP (%): Cumulative Average AP. FFP (%): Final Forgetting Percentage of old tasks. #Params: The number of trainable parameters used during training. #Memory: Memory usage during training with $1024 \times 1024$ input and batch size 1. #Time: Training time on ODinM-13 using 1 GPU, batch size 1, for 5 epochs. w/ T: Indicates the presence of weight transfer between tasks. w/o T: Indicates the absence of weight transfer between tasks. + 

Table 5: Comparison of iDPA at different knowledge injection positions. + 
| Φv | Φt | Φf | FAP↑ | CAP↑ | FFP↓ | #Params↓ | #Memory↓ | #Time↓ |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| ✓ |  |  | 47.44 | 51.78 | 2.91 | 4.36M | 7622M | 7h40m |
|  | ✓ |  | 38.44 | 44.89 | 7.53 | 6.89M | 7177M | 7h36m |
| ✓ | ✓ |  | 48.41 | 53.56 | 4.38 | 11.26M | 8295M | 9h10m |
|  |  | ✓ | 50.28 | 54.10 | 2.48 | 3.22M | 6590M | 6h28m |
| ✓ | ✓ | ✓ | 51.56 | 56.25 | 3.17 | 14.48M | 8945M | 9h32m |
+ 

![](images/244c404a4f2c2a86d131ea68e319095c24db02ea2f70e776af1b451358fa6f38.jpg)
(a) Different context knowledge 

![](images/594d58efab3f715c3e905fc0ba3e9aa690d7a5a93d17f23289c5df1687eed801.jpg)
(b) Different scale types 

![](images/8cf68182e31a15e7a4e69bbcef58e46a98588e04dd97d1a61efcf54c63e6d50b.jpg)
(c) Different learning rate 

![](images/7374b83eb434d5fcb554d69a39877677fcdba0ffea70782225da83d2ba32d7c1.jpg)
(d) Prompt length 

Figure 5: Impact of context knowledge, scale types, learning rate, and prompt length on model performance. 

Table 6: Comparison of iDPA with varying layers in ${\Phi }_{\mathbf{f}}$ . 
| Layers | FAP↑ | FAP50↑ | FAP75↑ | #Params↓ | #Memory↓ |
| --- | --- | --- | --- | --- | --- |
| 1 | 45.53 | 75.07 | 46.78 | 0.54M | 6054M |
| 2 | 48.16 | 77.72 | 50.89 | 1.07M | 6162M |
| 4 | 49.94 | 79.12 | 53.34 | 2.15M | 6371M |
| 6 (all) | 50.28 | 79.47 | 54.29 | 3.22M | 6590M |
+ +2024), which is designed for incremental VLOD learning, iDPA surpasses it by $46.62\%$ in FAP, $37.73\%$ in CAP, and reduces FFP by $47.19\%$ . This is due to the substantial gap between the medical and natural domains, along with the large differences across modalities, organs, and categories, which makes it difficult for ZiRa to regularize and reconfigure parameters to learn a shared space. Additionally, compared to the upper bound, iDPA is only $4.39\%$ lower in FAP. These results demonstrate the effectiveness of using a prompt-based CL method for the comprehensive MOD task. Our method maintains excellent performance while ensuring data independence, parameter efficiency, model resistance to forgetting, scalability, and flexibility. + +Few-Shot Data Setting. To simulate the challenging real-world scenario of limited data annotation in clinical settings, we also conduct experiments in the few-shot setting on ODinM-13, as shown in Tab. 2. In the 1-shot setting, our iDPA outperforms the best alternative by $4.38\%$ in FAP, and $4.07\%$ in CAP, with only a $0.44\%$ increase in FFP. In the 10-shot setting, iDPA outperforms the best result by $12.88\%$ in FAP, $8.65\%$ in CAP, and exhibits a minimal $0.39\%$ increase in FFP. In the 50-shot setting, iDPA outperforms the best result by $4.59\%$ in FAP, $2.76\%$ in CAP, and reduces FFP by $0.44\%$ . These results highlight the strong knowledge + +Table 7: Impact of scaling factor $\gamma$ in Eq. 7 on performance. + +
| γ | FAP↑ | CAP↑ | FFP↓ |
| --- | --- | --- | --- |
| 1.00 | 50.07 | 54.01 | 2.57 |
| 1.30 | 50.28 | 54.10 | 2.48 |
| 1.50 | 49.99 | 53.40 | 2.77 |
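For reference, the sketch below shows one way such a scaling factor can be applied before RoI pooling: the ground-truth box is enlarged by γ about its center and the corresponding region feature is pooled, in the spirit of Eq. (6). It assumes γ scales the box width and height; the stride, feature dimensions, and box coordinates are placeholder values, not taken from the actual pipeline.

```python
import torch
from torchvision.ops import roi_align

def enlarge_box(box: torch.Tensor, gamma: float) -> torch.Tensor:
    """Scale an (x1, y1, x2, y2) box by gamma about its center."""
    cx, cy = (box[0] + box[2]) / 2, (box[1] + box[3]) / 2
    w, h = (box[2] - box[0]) * gamma, (box[3] - box[1]) * gamma
    return torch.stack([cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2])

# Placeholder feature map (1 x C x H x W) and a ground-truth box in image coordinates.
feat = torch.randn(1, 256, 100, 100)
box = torch.tensor([120.0, 80.0, 200.0, 160.0])
roi = roi_align(feat, [enlarge_box(box, 1.3).unsqueeze(0)],
                output_size=7, spatial_scale=1 / 8)  # stride-8 feature map assumed
inst_feat = roi.mean(dim=(2, 3))  # (1, 256) pooled region feature, akin to v_c^{(j)}
```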
+ +Table 8: Comparison between the Naïve baseline and iDPA on $(\Phi_v + \Phi_t)$ . + +
| (Φv + Φt) | FAP↑ | CAP↑ | FFP↓ |
| --- | --- | --- | --- |
| Naïve | 41.72 | 47.30 | 8.57 |
| iDPA (ours) | 48.41 | 53.56 | 4.38 |
| Δ | 6.69 | 6.26 | 4.19 |
+ +transfer capability of our approach, which, by decoupling instance-level knowledge and leveraging DPA, greatly improves model performance and efficiency, particularly in data-scarce environments. + +Domain Continual Setting. iDPA achieves the best performance across all four polyp datasets under the continual domain setting, outperforming L2P and DualPrompt in FAP, CAP, and FFP. These results demonstrate the superior generalizability and stability of iDPA in handling domain shifts. + +Visualization. Fig. 4 shows the AP variation of different CL methods on ODinM-13. Our method outperforms existing ones throughout the incremental learning process, not just at the end. More qualitative results are provided in Fig. 7 in the appendix showing that iDPA produces more accurate bounding boxes with higher confidence for various MOD tasks + +than Zero-shot and the L2P method (Wang et al., 2022c), using enhanced knowledge transfer. + +# 5.3. Ablation Study + +We conduct ablation studies of the proposed modules in iDPA on ODinM-13 under the full data setting, as shown in Tab. 4. Compared to the Naïve prompt method, adding the IPG module improves FAP by $3.66\%$ , CAP by $2.83\%$ , and reduces FFP by $0.60\%$ . Adding the DPA module increases FAP by $2.11\%$ , CAP by $1.18\%$ , and reduces FFP by $0.58\%$ . Moreover, DPA reduces both gradient backpropagation computation and memory usage, while introducing only a minimal number of additional parameters. When both modules are combined, FAP increases by $5.29\%$ , CAP by $4.24\%$ , and FFP decreases by $1.83\%$ . These results demonstrate that decoupling instance knowledge from images effectively enhances object recognition and localization. By decoupling PA, DPA enables more efficient learning and better injection of prompt knowledge. Furthermore, since the two modules are orthogonal, combining them improves the model's ability to complete the IMOD task. Additionally, we investigate knowledge transfer across medical tasks. When IPG is not used for weight transfer, performance slightly decreases. However, despite the substantial differences between medical tasks, knowledge sharing still occurs. This is especially evident when reducing the number of training epochs, where weight transfer significantly boosts learning efficiency. For further details, please refer to the appendix. + +# 5.4. Empirical Analysis + +Impact of Knowledge Injection Position. As shown in Tab. 5, we compare different positions for prompt knowledge injection in VLOD models. The Fusion Encoder achieves the best balance between performance and cost. + +Impact of Context Knowledge. In Fig. 5a, we compare four types of context knowledge: Gaussian noise, image knowledge, instance-level knowledge from different layers (denoted as 'Instances*'), and instance-level knowledge from the corresponding layer (denoted as 'Instances'). Our experiments demonstrate that context-aware knowledge enhances the IMOD task, with the best performance achieved by instance knowledge from the corresponding layer. + +Impact of Scale $\lambda$ in DPA. We test four types of scale $\lambda$ in DPA: constant (1.0), gate mechanism, task-level ( $\lambda \in \mathbb{R}^{1 \times 1}$ ), and dim-level ( $\lambda \in \mathbb{R}^{1 \times d}$ ). As shown in Fig. 5b, dim-level $\lambda$ yields the best performance. + +Impact of Learning Rate. A grid search over the range [1e-5, 0.1] reveals that a learning rate of 0.01 provides the best performance. Results for iDPA at different learning rates are shown in Fig. 5c. 
+ +Impact of Prompt Length. We compare prompt lengths in + +full data and few-shot settings (Fig. 5d). A prompt length of 10 offers balanced performance and is chosen as the default. + +Impact of $X$ -Attn Layer Count. We conduct experiments with different numbers of $X$ -Attn layers (1, 2, 4, and 6) and find that incorporating all layers achieves the best performance, as shown in Tab. 6. + +Impact of scaling factor $\gamma$ in Eq. 7. As shown in Tab. 7, the model achieves the best performance when the scaling factor $\gamma$ is set to 1.30, yielding the highest FAP and AP, and the lowest FFP. This suggests that moderately enlarging the RoI region helps capture more useful contextual information. + +Fairness of Naïve Baseline Comparison. As shown in Tab. 8, iDPA outperforms the Naïve baseline when prompts are injected into both $\Phi_v$ and $\Phi_t$ , achieving gains of 6.69 FAP and 6.26 CAP, and reducing FFP by 4.19, demonstrating the effectiveness of our design. + +# 6. Conclusion + +This study presents iDPA for IMOD learning without catastrophic forgetting. iDPA efficiently generates and injects instance-level knowledge, reducing computational complexity and memory usage. It decouples target instance features and employs a continual concept perception mechanism to create and integrate concept prompts into the multimodal fusion process. Additionally, iDPA refines prompt attention into three key interaction steps, focusing on continual learning for efficient knowledge injection while preserving the original knowledge. For the evaluation, we introduce a new IMOD benchmark, ODinM-13, with 13 MOD datasets. Experiments show that iDPA outperforms previous SOTA methods in both full-data and few-shot settings. Our analysis also demonstrates that our method can be more efficient and memory-friendly compared to previous CL methods. + +# Impact Statement + +This paper presents work whose goal is to advance the field of Machine Learning. There are many potential societal consequences of our work, none of which we feel must be specifically highlighted here. + +# Acknowledgments + +The authors would like to thank the GLIP team and the developers of other continual learning methods for making their code and models publicly available, which greatly facilitated this work. We also acknowledge the contributions of individuals and organizations who have shared open-source medical imaging datasets. We are grateful to the anonymous reviewers and program chairs for their valuable and constructive feedback. + +# References + +Alayrac, J.-B., Donahue, J., Luc, P., Miech, A., Barr, I., Hasson, Y., Lenc, K., Mensch, A., Millican, K., Reynolds, M., et al. Flamingo: a visual language model for few-shot learning. Advances in neural information processing systems, 35:23716-23736, 2022. +Aljundi, R., Babiloni, F., Elhoseiny, M., Rohrbach, M., and Tuytelaars, T. Memory aware synapses: Learning what (not) to forget. In Proceedings of the European conference on computer vision (ECCV), pp. 139-154, 2018. +Bernal, J., Sánchez, F. J., Fernández-Esparrach, G., Gil, D., Rodríguez, C., and Vilarino, F. Wm-dova maps for accurate polyp highlighting in colonoscopy: Validation vs. saliency maps from physicians. Computerized medical imaging and graphics, 43:99-111, 2015. +Boccardi, M., Bocchetta, M., Morency, F. C., Collins, D. L., Nishikawa, M., Ganzola, R., Grothe, M. J., Wolf, D., Redolfi, A., Pievani, M., et al. Training labels for hippocampal segmentation based on the eadc-adni harmonized hippocampal protocol. 
*Alzheimer's & Dementia*, 11(2):175-183, 2015. +Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., and Zagoruyko, S. End-to-end object detection with transformers. In European conference on computer vision, pp. 213-229. Springer, 2020. +Cassidy, B., Reeves, N. D., Pappachan, J. M., Gillespie, D., O'Shea, C., Rajbhandari, S., Maiya, A. G., Frank, E., Boulton, A. J., Armstrong, D. G., et al. The dfuc 2020 dataset: Analysis towards diabetic foot ulcer detection. *touchREVIEW* in Endocrinology, 17(1):5, 2021. +Cheng, T., Song, L., Ge, Y., Liu, W., Wang, X., and Shan, Y. Yolo-world: Real-time open-vocabulary object detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 16901-16911, 2024. +Deng, J., Zhang, H., Ding, K., Hu, J., Zhang, X., and Wang, Y. Zero-shot generalizable incremental learning for vision-language object detection. arXiv preprint arXiv:2403.01680, 2024. +Devlin, J. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805, 2018. +Ding, Y., Liu, L., Tian, C., Yang, J., and Ding, H. Don't stop learning: Towards continual learning for the clip model. arXiv preprint arXiv:2207.09248, 2022. +Douillard, A., Ramé, A., Couairon, G., and Cord, M. Dytox: Transformers for continual learning with dynamic token + +expansion. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9285-9295, 2022. +Gong, H., Chen, G., Wang, R., Xie, X., Mao, M., Yu, Y., Chen, F., and Li, G. Multi-task learning for thyroid nodule segmentation with thyroid region prior. In 2021 IEEE 18th International Symposium on Biomedical Imaging (ISBI), pp. 257-261. IEEE, 2021. +Guo, M., Yi, H., Qin, Z., Wang, H., Men, A., and Lao, Q. Multiple prompt fusion for zero-shot lesion detection using vision-language models. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 283-292. Springer, 2023. +He, J., Zhou, C., Ma, X., Berg-Kirkpatrick, T., and Neubig, G. Towards a unified view of parameter-efficient transfer learning. arXiv preprint arXiv:2110.04366, 2021. +Jha, D., Smedsrud, P. H., Riegler, M. A., Halvorsen, P., Lange, T. d., Johansen, D., and Johansen, H. D. Kvasir-seg: A segmented polyp dataset. In International Conference on Multimedia Modeling, pp. 451-462. Springer, 2020. +Ji, G.-P., Xiao, G., Chou, Y.-C., Fan, D.-P., Zhao, K., Chen, G., and Van Gool, L. Video polyp segmentation: A deep learning perspective. Machine Intelligence Research, 19 (6):531-549, 2022. +Jia, M., Tang, L., Chen, B.-C., Cardie, C., Belongie, S., Hariharan, B., and Lim, S.-N. Visual prompt tuning. In European Conference on Computer Vision, pp. 709-727. Springer, 2022. +Kim, B., Yu, J., and Hwang, S. J. Eclipse: Efficient continual learning in panoptic segmentation with visual prompt tuning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 3346-3356, 2024. +Kirkpatrick, J., Pascanu, R., Rabinowitz, N., Veness, J., Desjardins, G., Rusu, A. A., Milan, K., Quan, J., Ramalho, T., Grabska-Barwinska, A., et al. Overcoming catastrophic forgetting in neural networks. Proceedings of the national academy of sciences, 114(13):3521-3526, 2017. +Lao, Q., Jiang, X., Havaei, M., and Bengio, Y. A two-stream continual learning system with variational domain-agnostic feature replay. IEEE Transactions on Neural Networks and Learning Systems, 33(9):4466-4478, 2021a. 
+Lao, Q., Mortazavi, M., Tahaei, M., Dutil, F., Fevens, T., and Havaei, M. Focl: Feature-oriented continual learning for generative models. Pattern Recognition, 120:108127, 2021b. + +Le, M., Nguyen, A., Nguyen, H., Nguyen, T., Pham, T., Van Ngo, L., and Ho, N. Mixture of experts meets prompt-based continual learning. arXiv preprint arXiv:2405.14124, 2024. +Li, J., Selvaraju, R., Gotmare, A., Joty, S., Xiong, C., and Hoi, S. C. H. Align before fuse: Vision and language representation learning with momentum distillation. Advances in neural information processing systems, 34: 9694-9705, 2021. +Li, L. H., Zhang, P., Zhang, H., Yang, J., Li, C., Zhong, Y., Wang, L., Yuan, L., Zhang, L., Hwang, J.-N., et al. Grounded language-image pre-training. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 10965-10975, 2022. +Li, X., Zhou, Y., Wu, T., Socher, R., and Xiong, C. Learn to grow: A continual structure learning framework for overcoming catastrophic forgetting. In International conference on machine learning, pp. 3925-3934. PMLR, 2019. +Li, Z. and Hoiem, D. Learning without forgetting. IEEE transactions on pattern analysis and machine intelligence, 40(12):2935-2947, 2017. +Lin, T.-Y., Maire, M., Belongie, S., Hays, J., Perona, P., Ramanan, D., Dollar, P., and Zitnick, C. L. Microsoft coco: Common objects in context. In Computer Vision-ECCV 2014: 13th European Conference, Zurich, Switzerland, September 6-12, 2014, Proceedings, Part V 13, pp. 740-755. Springer, 2014. +Liu, S., Zeng, Z., Ren, T., Li, F., Zhang, H., Yang, J., Li, C., Yang, J., Su, H., Zhu, J., et al. Grounding dino: Marrying dino with grounded pre-training for open-set object detection. arXiv preprint arXiv:2303.05499, 2023a. +Liu, S., Zeng, Z., Ren, T., Li, F., Zhang, H., Yang, J., Jiang, Q., Li, C., Yang, J., Su, H., et al. Grounding dino: Marrying dino with grounded pre-training for open-set object detection. In European Conference on Computer Vision, pp. 38-55. Springer, 2025. +Liu, Y., Wu, Y.-H., Ban, Y., Wang, H., and Cheng, M.-M. Rethinking computer-aided tuberculosis diagnosis. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 2646-2655, 2020. +Liu, Y., Schiele, B., Vedaldi, A., and Rupprecht, C. Continual detection transformer for incremental object detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 23799-23808, 2023b. + +Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., and Guo, B. Swin transformer: Hierarchical vision transformer using shifted windows. In Proceedings of the IEEE/CVF international conference on computer vision, pp. 10012-10022, 2021. +Lopez-Paz, D. and Ranzato, M. Gradient episodic memory for continual learning. Advances in neural information processing systems, 30, 2017. +Loshchilov, I. Decoupled weight decay regularization. arXiv preprint arXiv:1711.05101, 2017. +Ma, J., He, Y., Li, F., Han, L., You, C., and Wang, B. Segment anything in medical images. Nature Communications, 15(1):654, 2024. +Mallya, A. and Lazebnik, S. Packet: Adding multiple tasks to a single network by iterative pruning. In Proceedings of the IEEE conference on Computer Vision and Pattern Recognition, pp. 7765-7773, 2018. +Ngoc Lan, P., An, N. S., Hang, D. V., Long, D. V., Trung, T. Q., Thuy, N. T., and Sang, D. V. Neounet: Towards accurate colon polyp segmentation and neoplasm detection. 
In Advances in visual computing: 16th international symposium, ISVC 2021, virtual event, October 4-6, 2021, proceedings, part II, pp. 15-28. Springer, 2021. +Qin, Z., Yi, H., Lao, Q., and Li, K. Medical image understanding with pretrained vision language models: A comprehensive study. In The Eleventh International Conference on Learning Representations. +Radford, A., Kim, J. W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al. Learning transferable visual models from natural language supervision. In International conference on machine learning, pp. 8748-8763. PMLR, 2021. +Ravi, N., Gabeur, V., Hu, Y.-T., Hu, R., Ryali, C., Ma, T., Khedr, H., Radle, R., Rolland, C., Gustafson, L., et al. Sam 2: Segment anything in images and videos. arXiv preprint arXiv:2408.00714, 2024. +Rebuffi, S.-A., Kolesnikov, A., Sperl, G., and Lampert, C. H. icarl: Incremental classifier and representation learning. In Proceedings of the IEEE conference on Computer Vision and Pattern Recognition, pp. 2001-2010, 2017. +Ren, S. Faster r-cnn: Towards real-time object detection with region proposal networks. arXiv preprint arXiv:1506.01497, 2015. +Rolnick, D., Ahuja, A., Schwarz, J., Lillicrap, T., and Wayne, G. Experience replay for continual learning. Advances in neural information processing systems, 32, 2019. + +Setio, A. A. A., Traverso, A., De Bel, T., Berens, M. S., Van Den Bogaard, C., Cerello, P., Chen, H., Dou, Q., Fantacci, M. E., Geurts, B., et al. Validation, comparison, and combination of algorithms for automatic detection of pulmonary nodules in computed tomography images: the luna16 challenge. Medical image analysis, 42:1-13, 2017. +Shao, S., Li, Z., Zhang, T., Peng, C., Yu, G., Zhang, X., Li, J., and Sun, J. Objects365: A large-scale, high-quality dataset for object detection. In Proceedings of the IEEE/CVF international conference on computer vision, pp. 8430-8439, 2019. +Shin, H., Lee, J. K., Kim, J., and Kim, J. Continual learning with deep generative replay. Advances in neural information processing systems, 30, 2017. +Shmelkov, K., Schmid, C., and Alahari, K. Incremental learning of object detectors without catastrophic forgetting. In Proceedings of the IEEE international conference on computer vision, pp. 3400-3409, 2017. +Smith, J. S., Karlinsky, L., Gutta, V., Cascante-Bonilla, P., Kim, D., Arbelle, A., Panda, R., Feris, R., and Kira, Z. Coda-prompt: Continual decomposed attention-based prompting for rehearsal-free continual learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 11909-11919, 2023. +Tang, L., Tian, Z., Li, K., He, C., Zhou, H., Zhao, H., Li, X., and Jia, J. Mind the interference: Retaining pre-trained knowledge in parameter efficient continual learning of vision-language models. In European Conference on Computer Vision, pp. 346-365. Springer, 2025. +Vu, Q. D., Graham, S., Kurc, T., To, M. N. N., Shaban, M., Qaiser, T., Koohbanani, N. A., Khurram, S. A., Kalpathy-Cramer, J., Zhao, T., et al. Methods for segmentation and classification of digital microscopy tissue images. Frontiers in bioengineering and biotechnology, pp. 53, 2019. +Wang, Y., Huang, Z., and Hong, X. S-prompts learning with pre-trained transformers: An occam's razor for domain incremental learning. Advances in Neural Information Processing Systems, 35:5682-5695, 2022a. +Wang, Z., Zhang, Z., Ebrahimi, S., Sun, R., Zhang, H., Lee, C.-Y., Ren, X., Su, G., Perot, V., Dy, J., et al. 
Dualprompt: Complementary prompting for rehearsal-free continual learning. In European Conference on Computer Vision, pp. 631-648. Springer, 2022b. +Wang, Z., Zhang, Z., Lee, C.-Y., Zhang, H., Sun, R., Ren, X., Su, G., Perot, V., Dy, J., and Pfister, T. Learning to prompt for continual learning. In Proceedings of the + +IEEE/CVF conference on computer vision and pattern recognition, pp. 139-149, 2022c. +Wortsman, M., Ilharco, G., Kim, J. W., Li, M., Kornblith, S., Roelofs, R., Lopes, R. G., Hajishirzi, H., Farhadi, A., Namkoong, H., et al. Robust fine-tuning of zero-shot models. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 7959-7971, 2022. +Wu, S., Zhang, W., Jin, S., Liu, W., and Loy, C. C. Aligning bag of regions for open-vocabulary object detection. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 15254-15264, 2023. +Xu, Y., Zhang, M., Fu, C., Chen, P., Yang, X., Li, K., and Xu, C. Multi-modal queried object detection in the wild. Advances in Neural Information Processing Systems, 36, 2024. +Ye, Y., Xie, Y., Zhang, J., Chen, Z., Wu, Q., and Xia, Y. Continual self-supervised learning: Towards universal multi-modal medical data representation learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 11114-11124, 2024. +Yi, H., Qin, Z., Lao, Q., Xu, W., Jiang, Z., Wang, D., Zhang, S., and Li, K. Towards general purpose medical ai: Continual learning medical foundation model. arXiv preprint arXiv:2303.06580, 2023. +Yoon, J., Yang, E., Lee, J., and Hwang, S. J. Lifelong learning with dynamically expandable networks. arXiv preprint arXiv:1708.01547, 2017. +Zhu, J., Hamdi, A., Qi, Y., Jin, Y., and Wu, J. Medical sam 2: Segment medical images as video via segment anything model 2. arXiv preprint arXiv:2408.00874, 2024. + +# A. Theoretical Analysis + +This section presents a theoretical analysis demonstrating the superior efficiency of DPA over traditional prompt learning. + +# A.1. Overall Analysis + +$$ +\begin{array}{l} f _ {1} = \operatorname {C o n c a t} [ (1 - \lambda (p _ {t})) \operatorname {A t t n} _ {v \rightarrow t} (f _ {v}, p _ {t}) + \lambda (p _ {t}) \operatorname {A t t n} _ {v \rightarrow t} (p _ {v}, p _ {t}); \\ (1 - \lambda (f _ {t})) \operatorname {A t t n} _ {v \rightarrow t} \left(f _ {v}, f _ {t}\right) + \lambda (f _ {t}) \operatorname {A t t n} _ {v \rightarrow t} \left(p _ {v}, f _ {t}\right), \tag {14} \\ = \operatorname {C o n c a t} [ A, B ] \\ \end{array} +$$ + +where $p_{v,t} \in \mathbb{R}^{l \times d}$ represent the vision and text prompts, and $f_v, f_t$ are the visual and textual features before being fed into $\mathrm{Attn}_{v \to t}$ . + +$$ +f _ {2} = \left(1 - \lambda \left(f _ {t}\right)\right) \operatorname {A t t n} _ {v \rightarrow t} \left(f _ {v}, f _ {t}\right) + \lambda \left(f _ {t}\right) \operatorname {A t t n} _ {v \rightarrow t} \left(p _ {v}, f _ {t}\right) = B, \tag {15} +$$ + +If $\mathrm{Attn}_{v\to t}(\cdot ,\cdot)\in \mathbb{R}^{L\times d}$ , then the two terms $A(p_{t})$ and $A(f_{t})$ each belong to $\mathbb{R}^{L\times d}$ and $f_{1}\in \mathbb{R}^{L\times 2d}$ . In contrast, $f_{2}\in \mathbb{R}^{L\times d}$ . + +$$ +f _ {2} = \operatorname {A t t n} _ {v \rightarrow t} \left(f _ {v}, f _ {t}\right) + \lambda \left(f _ {t}\right) \Delta , \tag {16} +$$ + +where we have defined $\Delta = \mathrm{Attn}_{v\to t}(p_v,f_t) - \mathrm{Attn}_{v\to t}(f_v,f_t)$ . 
+ +$$ +\frac {\partial f _ {2}}{\partial \theta} = \frac {\partial \operatorname {A t t n} _ {v \rightarrow t} \left(f _ {v} , f _ {t}\right)}{\partial \theta} + \lambda \left(f _ {t}\right) \frac {\partial \Delta}{\partial \theta}. \tag {17} +$$ + +This indicates that $f_{2}$ has a lower-dimensional structure, residual components, and a more direct gradient flow. + +# A.2. Computation Cost + +Lemma A.1. $f_{2}$ is computational light than $f_{1}$ + +Proof. The overall $f_1$ is $f_1 = \operatorname{Concat}[A; B]$ . However, $f_2$ uses only the $B$ branch, $f_2 = B$ . The computation cost of branch $A$ is roughly $\mathcal{O}_A = \mathcal{O}(f_v, p_t) + \mathcal{O}(p_v, p_t) = \mathcal{O}(A + B)$ for $B \mathcal{O}_{f_2} = \mathcal{O}(B)$ . $\mathcal{O}_{f_1} > \mathcal{O}_{f_2}$ . + +# A.3. convergence benefit analysis + +Lemma A.2. Let $f_{1}$ and $f_{2}$ be our models. Assume both models have converged to any local minima and there be an optimal representation $f^{*} = B$ . Suppose further output of $f_{1}$ is locally linear around the optimum, then $f_{2}$ achieves the same performance as $f_{1}$ at the local convergence. + +Proof. The loss function $\mathcal{L}(f_{\mathrm{out}})$ , For $f$ , the output $y_{1} = h(\mathrm{Concat}[A;B])$ , and $y_{2} = h(B)$ . At convergence: $\nabla \mathcal{L}(f_1) = 0$ and $\nabla \mathcal{L}(f_2) = 0$ . There exists an optimal representation $f^{*}$ such that it is sufficient to have $f^{*} = B$ . + +Assume $h$ is locally linear at the optima, then there exists a matrix $M$ such that + +$$ +h \left(\operatorname {C o n c a t} [ A; B ]\right) = M \binom {A} {B} = M _ {1} A + M _ {2} B. \tag {18} +$$ + +Under convergence, $M_1A = 0$ , thus + +$$ +h \left(\operatorname {C o n c a t} [ A; B ]\right) = M _ {2} B = h ^ {\prime} (B). \tag {19} +$$ + +$$ +y _ {1} = h \left(\operatorname {C o n c a t} [ A; B ]\right) = h ^ {\prime} (B) = y _ {2}. \tag {20} +$$ + +Thus $f_{2}$ performs as well as $f_{1}$ after convergence to local minima. + +# B. More Implementation Details + +Benchmark. We collected 13 public datasets from the internet: DFUC-2020 (DFUC) (Cassidy et al., 2021), Kvasir (Jha et al., 2020), OpticNerv (OpticN) $^{1}$ , BCCD $^{2}$ , CPM-17 (Vu et al., 2019), Breast Cancer (BreastC) $^{3}$ , TBX11K (Liu et al., 2020), Kidney Tumor (KidneyT) $^{4}$ , Luna16 (Setio et al., 2017), ADNI (Boccardi et al., 2015), Meningioma (Meneng) $^{5}$ , Breast Tumor (BreastT) $^{6}$ , and TN3K (Gong et al., 2021). Among them, OpticN, BCCD, BreastC, KidneyT, Meneng, and BreastT are from the Roboflow $^{7}$ website. These datasets include eight different modalities: Photography, Endoscopy, Light Microscopy, Histopathology, X-ray, CT, MRI, and Ultrasound, covering nine different organs: Foot, Colorectal, Nerve, Blood/Cell, Lung, Brain, Breast, Kidney, and Thyroid. The random seed used for few-shot data generation is kept consistent with the one used during training. $k$ -shot means ensuring that each class in the current dataset contains at least $k$ instances. Three different orders were used during training. The dataset order and corresponding random seed are shown in the Tab. 9. + +Table 9: Task order under different random seeds. The table shows the dataset sequences used during training for three different random seeds (0, 5, and 10). + +
| Order | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Seed 0 | DFUC | Kvasir | OpticN | BCCD | CPM-17 | BreastC | TBX11K | KidneyT | Luna16 | ADNI | Meneng | BreastT | TN3K |
| Seed 5 | OpticN | BCCD | BreastT | Meneng | TN3K | Kvasir | TBX11K | KidneyT | DFUC | Luna16 | BreastC | CPM-17 | ADNI |
| Seed 10 | CPM-17 | BreastC | Luna16 | Kvasir | OpticN | Meneng | TN3K | BCCD | BreastT | KidneyT | TBX11K | DFUC | ADNI |
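For clarity, the sketch below illustrates one way a seeded $k$-shot subset of this kind can be drawn: images are added greedily until every class in the task has at least $k$ annotated instances. The data layout and function name are hypothetical and may differ from the actual benchmark scripts.

```python
import random
from collections import defaultdict

def sample_k_shot(annotations, k, seed):
    """annotations: list of (image_id, [class_ids of its boxes]) pairs for one task."""
    rng = random.Random(seed)            # same seed as the corresponding training run
    order = annotations[:]
    rng.shuffle(order)
    counts = defaultdict(int)
    picked = []
    for image_id, classes in order:
        # Keep the image only if it still contributes to an under-filled class.
        if any(counts[c] < k for c in classes):
            picked.append(image_id)
            for c in classes:
                counts[c] += 1
    return picked  # every class ends up with at least k instances, when available
```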
+ +Implementation. The proposed method is implemented in Python using the PyTorch library and runs on a PC. The code is based on the official GLIP (Li et al., 2022) implementation $^{8}$ , and its environment requirements remain unchanged. For full-data training, we use four NVIDIA 3090 GPUs with a batch size of 4, while for few-shot training, we use a single NVIDIA 3090 GPU with a batch size of 1. Unless otherwise specified, all experiments are trained for 5 epochs, with the learning rate reduced by a factor of 0.1 after 3 epochs. For all prompt-based CL methods (Wang et al., 2022c; b; Smith et al., 2023; Wang et al., 2022a; Tang et al., 2025; Le et al., 2024), the initial learning rate is set to 1e-2, whereas ZiRa (Deng et al., 2024) uses an initial learning rate of 1e-3. Standard fine-tuning (FT), joint training, sequential training, WiSE-FT (Wortsman et al., 2022), and experience replay (ER) (Rolnick et al., 2019) use an initial learning rate of 1e-4. The learning rates are determined via grid search within the range of [1e-5, 0.1]. To ensure reproducibility, all experiments are conducted with three different random seeds (0, 5, 10), and the dataset order is adjusted accordingly. The final results are reported as the average over three runs. + +Reproduction. We reproduce other prompt-based methods on GLIP by prompting all layers of both the vision and text encoders, whereas the original papers typically use only the embedding layer or a few initial layers (e.g., the first five layers). This discrepancy may lead to suboptimal performance on the IMOD task. The vision backbone is used as the query function, and the mean feature representation from its last layer is utilized to identify the task ID. For L2P (Wang et al., 2022c), we set the prompt length to 5. During inference, the top-5 prompts are selected from the prompt pool, following the official implementation. In the original L2P paper, updated prompts are selected via a key-matching mechanism during training, with diversity maintained using a frequency-based weighting technique. However, in the official code repository, specific prompts are masked for different tasks. We follow the implementation provided in the official code. For DualPrompt (Wang et al., 2022b), we set the prompt length to 10 for both key and value prompts. Two layers are designated as G(eneral)-Prompt, while the remaining 10 layers serve as E(xpert)-Prompt. For CODA (Smith et al., 2023), we set the prompt length to 8 and + +```txt +1https://universe.roboflow.com/neurosurgery/optic-nerv +2https://public.roboflow.com/object-detection/bccd +3https://universe.roboflow.com/tez-m06pk/breast-cancer-tbwa9 +4https://universe.roboflow.com/east-delta-university-rpdgs/kidney_tumor-tke8k +5https://universe.roboflow.com/mem-g72lg/manengioma +6https://universe.roboflow.com/qidiliu/breast-tumor-detection-nsikz +7https://roboflow.com/ +8https://github.com/microsoft/GLIP +``` + +additionally add an extra key to learn the task identity, following the official implementation. For S-Prompt (Wang et al., 2022a), DIKI (Tang et al., 2025), and NoRGa (Le et al., 2024), we set the prompt length to 10. S-Prompt employs K-Means to generate 5 prototypes for task identification. All reproductions adhere to the implementation in the official code. + +# C. Additional Results + +Tab. 10 compares the impact of weight transfer between tasks when training for 3 epochs. The results show that enabling weight transfer improves continual learning performance under limited training time. Tab. 
11 presents a detailed comparison of iDPA with different knowledge injection positions. The best performance is observed when knowledge is injected simultaneously into the vision, text, and fusion encoders. However, this setting leads to a higher forgetting rate compared to injecting knowledge only in the fusion encoder. Fig. 6 visualizes the performance dynamics across different knowledge injection positions. Tab. 12, Tab. 13, and Tab. 14 report the performance of iDPA compared with other continual learning methods under 1-shot, 10-shot, and 50-shot settings, respectively. Tab. 15 shows that iDPA achieves the lowest parameter count (3.34M), reduced FLOPs, and significantly lower memory consumption and training time compared to all baselines. While maintaining competitive inference speed (5.93 FPS), it offers the best overall efficiency among the evaluated continual learning methods. Tab. 16 demonstrates that iDPA consistently improves performance across 13 datasets under 1-shot, 5-shot, and 10-shot settings, with low variance indicating strong stability and generalization. Fig. 7 provides qualitative comparisons between iDPA, Ground Truth, Zero-shot, and L2P (Wang et al., 2022c), the pioneering prompt-based continual learning method. iDPA shows superior localization accuracy, more precise classification, and higher confidence scores. + +Table 10: Performance of using weight transfer over 3 epochs. FAP (%): Final Average AP. CAP (%): Cumulative Average AP. + +
| Transfer | DFUC | Kvasir | OpticN | BCCD | CPM-17 | BreastC | TBX11K | KidneyT | Luna16 | ADNI | Mengeng | BreastT | TN3k | FAP↑ | CAP↑ | FFP↓ |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| W/O | 46.86 | 73.69 | 63.61 | 60.25 | 29.88 | 50.68 | 30.59 | 57.88 | 30.58 | 42.80 | 50.84 | 19.23 | 51.28 | 46.78 | 51.87 | 3.98 |
| W | 46.34 | 71.05 | 66.86 | 60.33 | 30.65 | 49.16 | 29.39 | 63.78 | 28.16 | 39.99 | 56.97 | 27.10 | 52.36 | 47.86 | 52.06 | 2.47 |
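To make the "W" vs. "W/O" comparison in Table 10 concrete, the following is a minimal sketch of task-to-task weight transfer for prompt-style parameters. The `TaskPrompt` module, its dimensions, and the helper name are hypothetical illustrations, not the actual iDPA implementation.

```python
import copy
from typing import Optional

import torch
import torch.nn as nn

class TaskPrompt(nn.Module):
    """Hypothetical per-task prompt parameters (stand-in for the real module)."""
    def __init__(self, length: int = 10, dim: int = 256):
        super().__init__()
        self.prompt = nn.Parameter(torch.randn(length, dim) * 0.02)

def init_task_prompt(prev: Optional[TaskPrompt], length: int = 10, dim: int = 256) -> TaskPrompt:
    """With weight transfer ('W' in Table 10) the new task's parameters start from a copy
    of the previous task's weights; without it ('W/O') they start from a fresh init."""
    new = TaskPrompt(length, dim)
    if prev is not None:
        new.load_state_dict(copy.deepcopy(prev.state_dict()))
    return new

# Sequential tasks: task 0 starts randomly, later tasks inherit the previous task's weights.
prompts = []
for task_id in range(3):
    prev = prompts[-1] if prompts else None
    prompts.append(init_task_prompt(prev))
```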
+ +Table 11: Comparison of performance with different knowledge injection positions. FAP (%): Final Average AP. CAP (%): Cumulative Average AP. FFP (%): Final Forgetting Percentage of old tasks. + +
| Φv | Φt | Φf | DFUC | Kvasir | OpticN | BCCD | CPM-17 | BreastC | TBX11K | KidneyT | Luna16 | ADNI | Meneng | BreastT | TN3k | FAP↑ | CAP↑ | FFP↓ |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
|  |  |  | 42.28 | 72.53 | 66.30 | 58.94 | 27.08 | 49.73 | 30.25 | 62.38 | 27.51 | 38.23 | 56.00 | 32.98 | 52.46 | 47.44 | 51.78 | 2.91 |
|  |  |  | 40.92 | 70.24 | 29.61 | 47.24 | 26.08 | 42.09 | 20.45 | 55.47 | 23.58 | 22.92 | 56.93 | 26.13 | 38.10 | 38.44 | 44.89 | 7.53 |
|  |  |  | 46.57 | 73.05 | 59.51 | 59.82 | 34.62 | 47.35 | 32.15 | 62.24 | 29.69 | 41.98 | 58.83 | 29.51 | 54.01 | 48.41 | 53.56 | 4.38 |
|  |  |  | 47.09 | 73.76 | 66.85 | 60.29 | 36.54 | 50.98 | 32.69 | 64.98 | 31.15 | 44.42 | 57.20 | 34.65 | 53.03 | 50.28 | 54.10 | 2.48 |
|  |  |  | 49.40 | 76.04 | 66.40 | 60.91 | 33.15 | 53.12 | 36.43 | 65.73 | 32.19 | 45.94 | 58.15 | 35.67 | 57.19 | 51.56 | 56.25 | 3.17 |
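As an illustration of what toggling the injection position (Φv, Φt, Φf) compared in Table 11 amounts to, here is a hedged sketch in which each encoder layer can optionally prepend learnable prompt tokens. The layer class, shapes, and toggles are ours, not the iDPA code.

```python
import torch
import torch.nn as nn

class InjectedEncoderLayer(nn.Module):
    """Hypothetical encoder layer that can prepend learnable prompt tokens
    (the 'injected knowledge') to its input sequence when enabled."""
    def __init__(self, dim: int = 256, n_prompts: int = 10, enabled: bool = True):
        super().__init__()
        self.enabled = enabled
        self.prompts = nn.Parameter(torch.randn(n_prompts, dim) * 0.02)
        self.layer = nn.TransformerEncoderLayer(dim, nhead=8, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        if self.enabled:
            batch = x.size(0)
            p = self.prompts.unsqueeze(0).expand(batch, -1, -1)
            x = torch.cat([p, x], dim=1)            # inject before self-attention
        out = self.layer(x)
        if self.enabled:
            out = out[:, self.prompts.size(0):]      # drop the prompt positions again
        return out

# Toggle injection per encoder, mirroring the (vision, text, fusion) settings of Table 11.
vision_layer = InjectedEncoderLayer(enabled=True)
text_layer = InjectedEncoderLayer(enabled=False)
fusion_layer = InjectedEncoderLayer(enabled=True)
tokens = torch.randn(2, 16, 256)
print(vision_layer(tokens).shape)  # torch.Size([2, 16, 256])
```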
+ +![](images/b48698ab3da1b23049f7d255b5e6056dfae6459d228edf474db61093313c86ec.jpg) +Figure 6: Performance variation of iDPA in different location. + +Table 12: Performance of various continual learning methods on the ODinM-13 benchmark under the 1-shot setting. FAP (%): Final Average AP. CAP (%): Cumulative Average AP. FFP (%): Final Forgetting Percentage of old tasks. + +
| Methods | DFUC | Kvasir | OpticN | BCCD | CPM-17 | BreastC | TBX11K | KidneyT | Luna16 | ADNI | Meneng | BreastT | TN3k | FAP↑ | CAP↑ | FFP↓ |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Joint (Upper) | 7.99 | 51.35 | 24.88 | 33.54 | 36.28 | 9.18 | 1.94 | 22.43 | 4.20 | 1.54 | 46.18 | 12.35 | 8.48 | 20.03 | - | - |
| *Non-Prompt-based CL* |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |
| Sequential | 0.00 | 0.00 | 0.00 | 0.00 | 8.70 | 0.00 | 0.00 | 0.00 | 0.06 | 0.00 | 1.87 | 0.00 | 5.54 | 1.24 | 11.49 | 23.43 |
| WiSE-FT | 4.63 | 26.75 | 0.32 | 9.04 | 9.75 | 0.32 | 0.08 | 6.33 | 0.03 | 0.00 | 11.37 | 7.89 | 3.52 | 6.16 | 14.62 | 9.25 |
| ZiRa | 6.17 | 26.87 | 2.40 | 3.62 | 12.34 | 7.66 | 0.56 | 4.57 | 0.00 | 0.01 | 19.42 | 4.85 | 2.23 | 6.98 | 13.59 | 11.68 |
| *Prompt-based CL* |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |
| L2P | 1.56 | 2.16 | 0.00 | 10.46 | 12.45 | 1.14 | 0.21 | 3.69 | 0.25 | 0.00 | 7.91 | 0.43 | 1.99 | 3.25 | 7.18 | 3.14 |
| DualPrompt | 6.20 | 32.05 | 1.01 | 9.64 | 11.00 | 0.48 | 0.00 | 3.61 | 0.23 | 0.10 | 19.63 | 6.20 | 5.49 | 7.36 | 13.30 | 7.93 |
| S-Prompt | 2.24 | 7.94 | 3.55 | 6.68 | 8.72 | 2.35 | 0.42 | 3.25 | 0.02 | 0.00 | 5.69 | 1.00 | 1.53 | 3.34 | 8.90 | 5.66 |
| CODA | 0.76 | 2.69 | 7.19 | 24.02 | 20.42 | 0.34 | 0.92 | 0.69 | 1.03 | 0.13 | 14.95 | 5.73 | 10.68 | 6.89 | 13.96 | 4.57 |
| DIKI | 0.54 | 11.07 | 3.44 | 16.95 | 16.22 | 2.41 | 0.62 | 8.71 | 0.01 | 0.01 | 6.89 | 3.54 | 3.30 | 5.67 | 13.09 | 3.88 |
| NoRGa | 1.00 | 5.73 | 13.86 | 17.62 | 11.24 | 0.64 | 0.56 | 7.31 | 0.09 | 0.01 | 8.50 | 8.13 | 3.57 | 6.02 | 11.09 | 5.17 |
| iDPA (Ours) | 6.66 | 43.04 | 14.62 | 20.55 | 31.13 | 5.33 | 2.15 | 7.38 | 0.30 | 0.39 | 17.34 | 6.17 | 3.45 | 12.19 | 18.03 | 3.58 |
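The captions of Tables 10–14 abbreviate the metrics as FAP, CAP, and FFP. The sketch below computes them from a per-step AP matrix under common continual-learning definitions; these definitions are assumed here for illustration and may differ in detail from the paper's exact formulas.

```python
import numpy as np

def cl_metrics(ap: np.ndarray):
    """ap[i, j] = AP on task j evaluated after training on task i.
    Assumed textbook-style definitions (not quoted from the paper):
    FAP: average AP over all tasks after the final training step.
    CAP: the same average computed after every step, then averaged over steps.
    FFP: average drop of old tasks from their best earlier AP to their final AP."""
    n_tasks = ap.shape[0]
    fap = ap[-1].mean()
    cap = np.mean([ap[i, : i + 1].mean() for i in range(n_tasks)])
    ffp = np.mean([ap[: n_tasks - 1, j].max() - ap[-1, j] for j in range(n_tasks - 1)])
    return fap, cap, ffp

# Toy example with three tasks.
ap = np.array([[50.0, 0.0, 0.0],
               [45.0, 40.0, 0.0],
               [42.0, 35.0, 55.0]])
print(cl_metrics(ap))  # approximately (44.0, 45.5, 6.5)
```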
+ +Table 13: Performance of various continual learning methods on the ODinM-13 benchmark under the 10-shot setting. FAP (%): Final Average AP. CAP (%): Cumulative Average AP. FFP (%): Final Forgetting Percentage of old tasks. + +
| Methods | DFUC | Kvasir | OpticN | BCCD | CPM-17 | BreastC | TBX11K | KidneyT | Luna16 | ADNI | Meneng | BreastT | TN3k | FAP↑ | CAP↑ | FFP↓ |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Joint (Upper) | 32.60 | 60.99 | 56.07 | 56.14 | 41.01 | 33.29 | 10.49 | 44.17 | 13.38 | 10.55 | 67.57 | 20.77 | 29.76 | 36.68 | - | - |
| *Non-Prompt-based CL* |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |
| Sequential | 3.84 | 0.00 | 0.00 | 0.00 | 0.36 | 0.00 | 0.00 | 0.00 | 0.00 | 8.19 | 0.00 | 1.92 | 7.33 | 1.66 | 13.67 | 36.51 |
| WiSE-FT | 7.13 | 41.13 | 1.45 | 12.68 | 16.91 | 1.91 | 0.13 | 8.98 | 0.06 | 0.04 | 16.31 | 10.68 | 5.69 | 9.47 | 21.03 | 12.98 |
| ZiRa | 6.07 | 30.71 | 1.32 | 11.14 | 16.38 | 9.18 | 0.53 | 7.00 | 0.44 | 0.57 | 40.70 | 8.74 | 8.96 | 10.90 | 16.19 | 15.12 |
| *Prompt-based CL* |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |
| L2P | 2.92 | 16.65 | 2.46 | 17.90 | 11.04 | 4.08 | 0.09 | 4.34 | 0.02 | 0.02 | 3.86 | 1.03 | 2.72 | 5.16 | 9.63 | 4.48 |
| DualPrompt | 1.95 | 6.19 | 0.06 | 14.48 | 7.17 | 0.91 | 0.06 | 0.76 | 0.06 | 0.06 | 36.14 | 6.63 | 2.91 | 5.95 | 13.99 | 11.34 |
| S-Prompt | 8.20 | 16.33 | 9.13 | 9.12 | 10.23 | 5.53 | 0.43 | 8.01 | 0.25 | 0.13 | 37.67 | 1.13 | 2.95 | 8.39 | 12.98 | 4.36 |
| CODA | 5.23 | 0.23 | 1.33 | 25.57 | 3.34 | 0.28 | 0.22 | 2.10 | 0.01 | 0.01 | 7.78 | 4.49 | 6.78 | 4.41 | 14.20 | 12.60 |
| DIKI | 0.97 | 19.66 | 0.25 | 27.92 | 5.83 | 1.27 | 1.51 | 9.31 | 0.08 | 0.01 | 6.11 | 7.20 | 3.86 | 6.46 | 15.34 | 9.78 |
| NoRGa | 2.60 | 11.19 | 2.96 | 24.84 | 3.01 | 1.42 | 0.18 | 0.69 | 0.01 | 0.01 | 14.49 | 6.51 | 1.80 | 5.36 | 11.90 | 8.14 |
| iDPA (Ours) | 21.37 | 50.20 | 29.20 | 39.11 | 38.33 | 19.65 | 6.03 | 27.06 | 6.23 | 3.15 | 39.42 | 15.34 | 14.02 | 23.78 | 29.68 | 4.75 |
+ +Table 14: Performance of various continual learning methods on the ODinM-13 benchmark under the 50-shot setting. FAP (%): Final Average AP. CAP (%): Cumulative Average AP. FFP (%): Final Forgetting Percentage of old tasks. + +
| Methods | DFUC | Kvasir | OpticN | BCCD | CPM-17 | BreastC | TBX11K | KidneyT | Luna16 | ADNI | Meneng | BreastT | TN3k | FAP↑ | CAP↑ | FFP↓ |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Joint (Upper) | 39.56 | 66.52 | 64.41 | 59.26 | 41.78 | 43.82 | 26.48 | 52.90 | 22.61 | 66.40 | 72.93 | 25.61 | 47.92 | 48.48 | - | - |
| *Non-Prompt-based CL* |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |
| Sequential | 0.00 | 0.00 | 0.00 | 0.00 | 5.51 | 0.00 | 0.00 | 0.00 | 0.00 | 22.19 | 0.00 | 0.00 | 16.20 | 3.38 | 14.95 | 46.86 |
| WiSE-FT | 8.18 | 43.77 | 1.29 | 13.05 | 14.99 | 2.93 | 0.11 | 10.59 | 0.08 | 0.05 | 21.55 | 10.08 | 6.44 | 10.24 | 23.34 | 16.03 |
| ZiRa | 2.07 | 5.58 | 0.39 | 5.45 | 4.89 | 3.56 | 0.23 | 0.78 | 0.45 | 3.03 | 4.85 | 5.51 | 8.58 | 3.49 | 15.78 | 37.28 |
| *Prompt-based CL* |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |
| L2P | 29.13 | 54.05 | 34.74 | 38.00 | 35.61 | 38.23 | 6.41 | 34.59 | 13.33 | 2.74 | 46.20 | 10.18 | 19.67 | 27.91 | 35.20 | 5.66 |
| DualPrompt | 24.84 | 27.49 | 25.44 | 47.18 | 27.36 | 29.17 | 3.33 | 21.87 | 13.09 | 6.57 | 16.32 | 9.28 | 13.82 | 20.44 | 34.94 | 16.91 |
| S-Prompt | 23.79 | 39.45 | 33.67 | 16.90 | 8.74 | 29.77 | 6.00 | 28.24 | 8.38 | 4.78 | 61.73 | 13.08 | 21.40 | 22.76 | 30.86 | 7.92 |
| CODA | 27.42 | 65.27 | 31.74 | 40.66 | 25.85 | 36.81 | 17.28 | 43.72 | 16.37 | 6.93 | 54.25 | 20.55 | 25.91 | 31.75 | 40.86 | 4.37 |
| DIKI | 34.26 | 68.82 | 34.05 | 53.26 | 33.02 | 38.06 | 13.93 | 50.27 | 20.74 | 5.42 | 48.16 | 8.17 | 34.70 | 34.06 | 42.27 | 5.85 |
| NoRGa | 30.69 | 58.17 | 39.23 | 53.81 | 28.12 | 38.82 | 15.31 | 46.16 | 15.21 | 6.76 | 44.04 | 18.96 | 21.87 | 32.09 | 39.12 | 5.70 |
| iDPA (Ours) | 34.79 | 59.03 | 52.64 | 58.12 | 39.33 | 37.35 | 14.78 | 52.77 | 22.70 | 24.55 | 56.32 | 10.99 | 39.10 | 38.65 | 45.03 | 3.93 |
+ +Table 15: Efficiency comparison of different continual learning methods. Metrics include the number of parameters, floating point operations (FLOPs), memory consumption, total training time, and inference speed. Our method achieves the best trade-off with the lowest parameter count and competitive inference performance. + +
| Methods | #Params↓ | #FLOPs↓ | #Memory↓ | #Time↓ | Inference Speed↑ |
| --- | --- | --- | --- | --- | --- |
| Joint (Upper) | 231.76M | 488.03 GMac | 13129M | 9h55min | 6.18 FPS |
| Sequential | 231.76M | 488.03 GMac | 13129M | 9h55min | 6.18 FPS |
| WiSE-FT | 231.76M | 488.03 GMac | 13129M | 9h55min | 6.18 FPS |
| ER | 231.76M | 488.03 GMac | 13129M | 11h15min | 6.18 FPS |
| ZiRa | 10.23M | 490.15 GMac | 8377M | 6h25min | 6.11 FPS |
| L2P | 6.97M | 601.50 GMac | 10288M | 7h50min | 5.08 FPS |
| DualPrompt | 4.83M | 583.82 GMac | 9417M | 7h36min | 5.25 FPS |
| S-Prompt | 2.73M | 590.89 GMac | 5366M | 8h24min | 5.13 FPS |
| CODA-Prompt | 10.97M | 583.82 GMac | 9803M | 9h03min | 5.26 FPS |
| DIKI | 8.76M | 583.82 GMac | 9754M | 7h49min | 5.16 FPS |
| NoRGa | 8.76M | 583.82 GMac | 9963M | 8h07min | 5.17 FPS |
| iDPA (Ours) | 3.34M | 506.00/501.00 GMac | 6590M | 5h46min | 5.93 FPS |
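For orientation, below is a minimal sketch of how parameter counts and inference speed of the kind listed in Table 15 are commonly measured in PyTorch. The backbone is a stand-in (not GLIP), FLOPs would require a separate profiler, and this is not the authors' measurement script.

```python
import time
import torch
import torchvision

model = torchvision.models.resnet50().eval()   # stand-in backbone, not GLIP
n_params = sum(p.numel() for p in model.parameters())
print(f"#Params: {n_params / 1e6:.2f}M")       # parameter count in millions

x = torch.randn(1, 3, 800, 800)                # dummy input at a detection-style resolution
with torch.no_grad():
    for _ in range(5):                         # warm-up runs
        model(x)
    n_runs = 20
    start = time.perf_counter()
    for _ in range(n_runs):
        model(x)
    fps = n_runs / (time.perf_counter() - start)
print(f"Inference speed: {fps:.2f} FPS")
# Peak GPU memory during training could be read with torch.cuda.max_memory_allocated().
```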
+ +Table 16: Mean performance across 13 datasets under 1-shot, 5-shot, and 10-shot settings. $\sigma$ denotes the average improvement of our iDPA method over the baseline. Variance results are included to illustrate the stability across multiple runs. + +
| Shots | DFUC | Kvasir | OpticN | BCCD | CPM-17 | BreastC | TBX11K | KidneyT | Luna16 | ADNI | Meneng | BreastT | TN3k |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 1 | 6.66 | 43.04 | 14.62 | 20.55 | 31.13 | 5.33 | 2.15 | 7.38 | 0.30 | 0.39 | 17.34 | 6.17 | 3.45 |
| σ | 2.55 | 0.95 | 4.89 | 3.86 | 1.71 | 0.93 | 0.00 | 1.55 | 0.14 | 0.03 | 2.18 | 2.46 | 1.31 |
| 5 | 21.37 | 50.20 | 29.20 | 39.11 | 38.33 | 19.65 | 6.03 | 27.06 | 6.23 | 3.15 | 39.42 | 15.34 | 14.02 |
| σ | 1.62 | 0.86 | 3.70 | 3.24 | 2.63 | 1.24 | 0.01 | 0.02 | 1.68 | 1.26 | 1.34 | 1.58 | 3.80 |
| 10 | 34.79 | 59.03 | 52.64 | 58.12 | 39.33 | 37.35 | 14.78 | 52.77 | 22.70 | 24.55 | 56.32 | 10.99 | 39.10 |
| σ | 2.92 | 1.89 | 3.75 | 4.16 | 3.76 | 2.34 | 0.00 | 0.25 | 0.40 | 0.05 | 2.47 | 0.51 | 4.07 |
+ +![](images/d575ac10c08b47552e4f5e781c2894bda4f6a4372fd78492161dad4be0bc7d27.jpg) +Figure 7: Visualization results of iDPA compared with L2P and Zero-shot at the end of training with random seed 0 on ODinM-13. \ No newline at end of file diff --git a/ICML/2025/iDPA_ Instance Decoupled Prompt Attention for Incremental Medical Object Detection/images.zip b/ICML/2025/iDPA_ Instance Decoupled Prompt Attention for Incremental Medical Object Detection/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..1319b427d0853ae86e3328c8b6202b6364e3f876 --- /dev/null +++ b/ICML/2025/iDPA_ Instance Decoupled Prompt Attention for Incremental Medical Object Detection/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:684f00577276a94f8108e1c77e63b3ef9c5ea7b688155857b0d8bd7821ce510f +size 1683960 diff --git a/ICML/2025/iDPA_ Instance Decoupled Prompt Attention for Incremental Medical Object Detection/layout.json b/ICML/2025/iDPA_ Instance Decoupled Prompt Attention for Incremental Medical Object Detection/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..d0840fd807d2ca31c2f4629eef40a1e33b5bdf11 --- /dev/null +++ b/ICML/2025/iDPA_ Instance Decoupled Prompt Attention for Incremental Medical Object Detection/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a66b1f03d016040b164e16440a84e995871d8acb8f67f2c427da8bef42025497 +size 677145 diff --git a/ICML/2025/iN2V_ Bringing Transductive Node Embeddings to Inductive Graphs/ec3b92d6-d965-474b-8dbc-c6cd2382d329_content_list.json b/ICML/2025/iN2V_ Bringing Transductive Node Embeddings to Inductive Graphs/ec3b92d6-d965-474b-8dbc-c6cd2382d329_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..d8a795cb693d0b01cd3dcfb7e3dbbba5983d414b --- /dev/null +++ b/ICML/2025/iN2V_ Bringing Transductive Node Embeddings to Inductive Graphs/ec3b92d6-d965-474b-8dbc-c6cd2382d329_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b0fd2f54e017c09dc53697dcf2b0d0371da92ebd7419bb1987d3ad6321db28f6 +size 212881 diff --git a/ICML/2025/iN2V_ Bringing Transductive Node Embeddings to Inductive Graphs/ec3b92d6-d965-474b-8dbc-c6cd2382d329_model.json b/ICML/2025/iN2V_ Bringing Transductive Node Embeddings to Inductive Graphs/ec3b92d6-d965-474b-8dbc-c6cd2382d329_model.json new file mode 100644 index 0000000000000000000000000000000000000000..998463134949c86388373d56081ef8e9e78fe870 --- /dev/null +++ b/ICML/2025/iN2V_ Bringing Transductive Node Embeddings to Inductive Graphs/ec3b92d6-d965-474b-8dbc-c6cd2382d329_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3432a187fd4bc12f29371371140c128c5c0aba75fee1e4d77ed9566dfa2b4ebf +size 236783 diff --git a/ICML/2025/iN2V_ Bringing Transductive Node Embeddings to Inductive Graphs/ec3b92d6-d965-474b-8dbc-c6cd2382d329_origin.pdf b/ICML/2025/iN2V_ Bringing Transductive Node Embeddings to Inductive Graphs/ec3b92d6-d965-474b-8dbc-c6cd2382d329_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..3c1bdb7ca7c7291d344c5f2c1ee10ccad71ff934 --- /dev/null +++ b/ICML/2025/iN2V_ Bringing Transductive Node Embeddings to Inductive Graphs/ec3b92d6-d965-474b-8dbc-c6cd2382d329_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:71fb6a800e326acae2084899662b1ef5423982ff36ba3b5846b97c61ea7fb9a5 +size 785586 diff --git a/ICML/2025/iN2V_ Bringing Transductive Node Embeddings to Inductive 
Graphs/full.md b/ICML/2025/iN2V_ Bringing Transductive Node Embeddings to Inductive Graphs/full.md new file mode 100644 index 0000000000000000000000000000000000000000..2f99c3e58166c097cb8e9361fb883c3846195220 --- /dev/null +++ b/ICML/2025/iN2V_ Bringing Transductive Node Embeddings to Inductive Graphs/full.md @@ -0,0 +1,394 @@ +# iN2V: Bringing Transductive Node Embeddings to Inductive Graphs + +Nicolas Lell1 Ansgar Scherp1 + +# Abstract + +Shallow node embeddings like node2vec (N2V) can be used for nodes without features or to supplement existing features with structure-based information. Embedding methods like N2V are limited in their application on new nodes, which restricts them to the transductive setting where the entire graph, including the test nodes, is available during training. We propose inductive node2vec (iN2V), which combines a post-hoc procedure to compute embeddings for nodes unseen during training and modifications to the original N2V training procedure to prepare the embeddings for this post-hoc procedure. We conduct experiments on several benchmark datasets and demonstrate that iN2V is an effective approach to bringing transductive embeddings to an inductive setting. Using iN2V embeddings improves node classification by 1 point on average, with up to 6 points of improvement depending on the dataset and the number of unseen nodes. Our iN2V is a plug-in approach to create new or enrich existing embeddings. It can also be combined with other embedding methods, making it a versatile approach for inductive node representation learning. Code to reproduce the results is available at https://github.com/Foisunt/iN2V. + +# 1. Introduction + +A graph neural network (GNN) may be trained without prior knowledge about the data it will encounter after deployment. This is because, in real-world graphs, new nodes and edges appear or disappear over time. For example, papers appear in citation networks, products are added to or removed from co-purchase graphs, and users join social platforms, creating new connections. These scenarios align with the inductive setting in graph learning, where test data is entirely unseen + +1Research Group on Data Science and Big Data Analytics, Ulm University, Ulm, Germany. Correspondence to: Nicolas Lell . + +Proceedings of the $42^{nd}$ International Conference on Machine Learning, Vancouver, Canada. PMLR 267, 2025. Copyright 2025 by the author(s). + +during training. In contrast, the transductive setting allows access to the entire graph during training, while the test node labels remain hidden. From a broader perspective, the inductive setting resembles a single time step in a temporal graph learning task. In temporal GNNs, the goal is to learn representations over multiple snapshots of a graph or a graph with temporal information attached to the nodes and edges, respectively (Polleres et al., 2023; Longa et al., 2023). + +Various methods exist to compute node embeddings solely from graph edges (Perozzi et al., 2014; Grover & Leskovec, 2016; Donnat et al., 2018). This is important when node features are unavailable or for enriching existing features. When using text-based embeddings such as Bag of Words, some embeddings might be missing. For example, due to out-of-vocabulary words, the Citeseer dataset (Sen et al., 2008) contains 15 nodes with empty embeddings. Other reasons for new nodes include newly created social media accounts that yet lack user-provided information. + +Absent node features pose a challenge for GNNs. 
Message-passing GNNs such as GraphSAGE (Hamilton et al., 2017) can infer missing information from neighboring nodes. MLP-based GNNs like Graph-MLP (Hu et al., 2021) and GLNN (Zhang et al., 2022) rely solely on node features, i.e., do not have message-passing, and cannot handle such cases. A common approach to addressing missing features is Feature Propagation (Rossi et al., 2022). It propagates node features along graph edges, filling in missing features while preserving existing features. + +We expand on this idea and introduce iN2V, a general and simple post-hoc approach to using trained embeddings to induce embeddings for nodes appearing in the inductive test set. We modify the training process for the popular transductive embedding model node2vec (Grover & Leskovec, 2016) to foresee future embeddings. We propose a simple but effective post-hoc procedure for propagating and updating embeddings for new nodes in the inductive setting, effectively enabling representation learning for unseen nodes. Unlike Feature Propagation (FP), our iN2V adapts the embeddings of training nodes during propagation, a crucial feature to enable the inductive setting. Since iN2V operates on N2V embeddings rather than raw node features provided by the datasets, it avoids relying on external node-specific information. Furthermore, it can be combined + +with any existing embeddings from the datasets, enriching the node representations and improving downstream task performance. We evaluate the effectiveness of our iN2V embeddings on a range of homophilic and heterophilic datasets using the MLP and GraphSAGE models. Averaged over the other parameters, iN2V outperforms Feature Propagation by 1 point on homophilic and 0.7 points on heterophilic datasets, 1.3 points when using MLP and 0.6 points when using GraphSAGE as the classification model. When using only the extended N2V embeddings, iN2V outperforms FP by 1.3 points and by 0.6 points when using both the extended N2V embeddings and graph features. Finally, when using at most $20\%$ of the nodes for embedding generation and training, iN2V outperforms FP by 1.2 points vs 0.8 points when using at least $60\%$ of the nodes for training. In summary, our contributions are: + +- Introduce iN2V, a simple and effective post-hoc method for extending trained node embeddings to unseen nodes. +- Enhance node2vec training with modifications that prepare their adaptability to inductive settings. +- Demonstrate performance gains, showing that both the inductive extension and modified training improve classification accuracy. +- Validate iN2V's robustness, showing it remains effective even when trained on only $10\%$ of nodes; in some cases outperforming using the original dataset features. + +# 2. Related Work + +# 2.1. Node Embeddings without Features + +Many node feature generation methods are based on random walks. DeepWalk (Perozzi et al., 2014) generates random walks from a graph, treats each random walk as a sentence, and trains word2vec (Mikolov et al., 2013) embeddings on those random walks. Building on DeepWalk, node2vec (Grover & Leskovec, 2016) introduces a biased random walk generator to better balance between locality and exploration by giving distinct probabilities to return to the previous node, go to a node connected to the previous node, or visit a node not connected to the previous node. LINE (Tang et al., 2015) generates two sets of embeddings independently and concatenates them afterward. 
The first embedding optimizes that neighbors are similar, and the second embedding that nodes with many connections have similar embeddings. Another approach to node embeddings is subgraph2vec (Narayanan et al., 2016), which first generates rooted subgraphs for all nodes and then learns skip-gram embedding where the subgraphs of neighboring nodes are used as context for the current node. While these approaches build on the idea that neighboring nodes should have more + +similar embeddings than distant nodes, struc2vec (Ribeiro et al., 2017) focuses on neighborhood degree patterns. Random walks are done based on edge weights of a fully connected graph, with edge weights calculated from the similarity of the degree distribution of the neighborhood of each node. GraphWave (Donnat et al., 2018) treats spectral graph wavelets as distributions to provide nodes with similar structural roles similar embeddings. Sub2Vec (Adhikari et al., 2018) trains subgraph-level embeddings by applying paragraph2vec (Le & Mikolov, 2014) with additional random walks to better preserve the neighborhood and structural properties of the subgraphs. There are also approaches for graph-level embeddings like graph2vec (Narayanan et al., 2017). It treats subgraphs as vocabulary and applies the doc2vec skip-gram training process. Learning graph-level representations is less related to our work, which aims to learn features for nodes unseen during training. + +RDF2vec (Ristoski & Paulheim, 2016) is similar to DeepWalk but applied to Resource Description Framework (RDF) graphs. It first converts an RDF graph into sequences and then trains word2vec (Mikolov et al., 2013) on them. There are different follow-up works for RDF2vec; for example, (Hahn & Paulheim, 2024) used RDF2vec in a continual setup by sampling new walks starting from the new edges or entities. Other embedding models used for Knowledge Graphs (KGs) aim to learn not only node but also edge or relation embeddings. TransE (Bordes et al., 2013) embedded nodes and relations such that if there is a relation between two entities. The first entity's embedding added with the relation embedding is trained to be close to the second entity's embedding. Follow-up work (Wang et al., 2014; Trouillon et al., 2016; Sun et al., 2019) replaced the addition of real-valued vectors with other operations and other vector spaces, such as Hadamard product in a complex-valued vector space, and investigated regularizer and the effect of inverse relations on Knowledge Graph Completion (KGC) performance (Lacroix et al., 2018). ReFactorGNNs (Chen et al., 2022) try to combine the good KGC performance of these factorization-based models with the ease of feature integration and inductive applicability of GNNs into a single model for KGC. FedE (Chen et al., 2021) is a federated knowledge embedding framework that can use any knowledge graph embedding with multiple clients, each only having access to a part of the knowledge graph. This is not applicable to an inductive setting, as the test graph is not seen at any time during the embeddings. + +The Unifying Model (Jia & Benson, 2022) fits a Markov Random Field to a graph, which can be similar to label propagation or a linear GCN depending on the attributes used. The model allows the sampling of new graphs from the training distribution, which differs from our task of providing embeddings to unseen nodes in graphs with no attributes. + +# 2.2. 
GNNs + +The most well-known graph neural network is GCN (Kipf & Welling, 2017), aggregating neighbors with weights based on their degree. GraphSAGE (Hamilton et al., 2017) modified GCN by considering the embedding of the current node separately from the neighbor aggregation and introducing sampling schemes to deal with large graphs. Other modifications of GCN used attention to assign different weights to neighbors (Velickovic et al., 2018) or make deep models easier to train by adding different kinds of skip connections (Chen et al., 2020; Xu et al., 2018; Sancak et al., 2024). For efficient models, besides reducing the number of message-passing layers (Wu et al., 2019), some works trained MLPs without using the edges for inference. Graph-MLP (Hu et al., 2021) incorporated edge information by pulling neighboring embeddings closer together with a contrastive loss. GLNN (Zhang et al., 2022) and NOSMOG (Tian et al., 2023) distill GNNs into MLPs. NOSMOG additionally increased robustness to noise with adversarial feature augmentation and concatenates DeepWalk embeddings to the input for capturing more structured information. + +Homophily is the characteristic of a graph in which neighboring nodes share the same class. Heterophilic graphs, i.e., graphs where neighbors usually belong to different classes, are an active area of research with work on how to measure homophily (Lim et al., 2021; Platonov et al., 2023a; Mironov & Prokhorenkova, 2024) and heterophilic datasets (Platonov et al., 2023b). Models that can better deal with heterophilic data use novel aggregations, consider multi-hop neighborhoods, distinguish homo- and heterophilic edges, or make the graph more homophilic using rewiring (Zhu et al., 2020; Lim et al., 2021; Kohn et al., 2024; Abu-El-Haija et al., 2019; Lell & Scherp, 2024; Chien et al., 2021; Luan et al., 2022; Bi et al., 2024). + +# 3. Inductive N2V + +The principal idea of our inductive N2V (iN2V) algorithm is to simply assign each test node the average embedding of its neighbors from the training set. This is repeated for multiple iterations to also deal with test nodes with longer distances to training nodes. The N2V embedding training is modified so that the embedding of training nodes are optimally prepared to induce embeddings to new nodes only seen during testing. + +# 3.1. Notation and Formalization + +Given a graph $G = (E, V)$ with node set $V \subset \mathbb{N}$ and edge set $E \subseteq V \times V$ , with disjoint training, validation, and test sets $V_{train}, V_{val}, V_{test} \subset V$ . $N(v) = \{x \in V \mid (v, x) \in E\}$ is the set of neighbors of $v$ . For each node $v \in V$ , + +![](images/49be2c0a458b5aa71058a527bc190f51c2ea306b072b1c64f3656e816b0f99d8.jpg) + +![](images/5a9007b2d8563b4737e7af57e9935df23c860b0d516b1467399ebe77f639197a.jpg) + +![](images/5d3d4dbaa828545f219ca65dcfb63f9871b9007fa440db178a0c61d89098fb00.jpg) +Figure 1. Three example graphs illustrate the post-hoc extensions to the white test nodes after obtaining embeddings for the gray training nodes in the inductive setup. Figure a) shows a simple example where node 2 obtains the average embedding of nodes 1 and 3. In Figure b) node 5 got a distant embedding during embedding training, but during the post-hoc extension it should be updated to move between the embeddings of the nodes from graphs $G_{A}$ and $G_{B}$ . Finally, Figure c) illustrates how iN2V needs only four iterations to provide embeddings to all nodes. + +we want to train an embedding $h_v \in \mathbb{R}^d$ . 
In the inductive setting, only the subgraph induced by $V_{train}$ is available for training the embeddings. These embeddings then need to be extended to the remaining nodes $v_i \in V \setminus V_{train}$ in the validation and test set. As we use existing benchmark datasets, our nodes also have classification labels $Y \in \mathbb{N}^{|V|}$ and existing node features $X \in \mathbb{R}^{|V| \times \hat{d}}$ . + +# 3.2. Example + +We motivate the different components of our post-hoc extension and illustrate their effect with the three example graphs a), b), and c) in Figure 1. Training nodes are shown in gray, and test nodes in white. The latter are hidden during training in the inductive case. In graph a) it is quite straightforward that $v_{2}$ should obtain the average of the embeddings of $v_{1}$ and $v_{3}$ . In graph b), $G_{A}$ and $G_{B}$ are connected subgraphs of multiple training nodes with similar embeddings. When following the averaging idea, $v_{4}$ gets an embedding that is close to the average embedding in $G_{A}$ but skewed towards the embedding of $v_{5}$ . Considering that the training embeddings were generated by N2V, the connected nodes in $G_{A}$ and $G_{B}$ got meaningful embeddings during training, while $v_{5}$ has a distant embedding as it has no neighbors in the training set and therefor only appeared as a negative sample during training. Contrary to Feature Propagation (Rossi + +et al., 2022), in this case, it is useful to allow the adaption of input (training) embeddings. Following this line of thought while also maintaining some stability for the nodes with existing embeddings, each embedding should be a combination of itself and the average neighbor embedding. When doing multiple iterations of such an averaging procedure, the embedding of $v_{5}$ moves in between the embeddings of nodes of $G_{A}$ and $G_{B}$ . However, too many iterations pose the challenge that all embeddings of individual nodes will converge to the average node embedding of that graph. Graph c) illustrates the challenge of extending embeddings into longer sequences of test nodes. When just averaging the neighborhood embedding for four iterations, the embedding of $v_{17} = v_{7} / 4^{3}$ is close to zero. Feature Propagation handles this by keeping the input embeddings fixed and iterating many times until convergence. We already established the usefulness of adapting input embeddings in the example graph b). Therefore, we handle long sequences and high-degree nodes by considering only nodes that already have an embedding for averaging each iteration. That means that after the first iteration $v_{9} = v_{7}$ , after the second iteration $v_{8}, v_{13}, v_{10}, v_{9} = v_{7}$ , and so on. This leads to $v_{17} = v_{7}$ after only four iterations. + +# 3.3. Generating Inductive Embeddings + +We propose an iterative algorithm to extend trained embeddings to the unseen nodes. Let $h_v^{(t)}$ be the embedding of node $v$ after $t$ iterations of our algorithm. We use a lookup vector $s \in \{0,1\}^{|V|}$ with $s_v$ being the $v$ -th element in the vector $s$ to keep track of which nodes already have embeddings and use $N_s(v) = \{x \in N(v) \mid s_x = 1\}$ to denote the set of neighbors which have an embedding. The mean embedding of a set of nodes $S$ is $m_S^{(t)} = \frac{1}{|S|} \sum_{v \in S} h_v^{(t)}$ . For initialization, $h_v^{(0)}$ is set to the N2V embedding $h_u$ for training nodes $u \in V_{train}$ and to 0 for nodes $w \in V \setminus V_{train}$ not from the training set. 
The lookup vector is initialized with $s_u = 1$ and $s_w = 0$. Then $h^{(t)}$ is calculated from $h^{(t-1)}$ by:

$$
h_v^{(t)} = \left\{ \begin{array}{lll}
h_v^{(t-1)} & \text{if } N_s(v) = \emptyset & \text{(1a)}\\
\lambda h_v^{(t-1)} + (1 - \lambda)\, m_{N_s(v)}^{(t-1)} & \text{if } s_v = 1 & \text{(1b)}\\
m_{N_s(v)}^{(t-1)} & \text{else} & \text{(1c)}
\end{array} \right.
$$

This means that if $v$ has no neighbor with an embedding, $h_v$ does not change (1a). If both $v$ and at least one neighbor of $v$ have an embedding, we calculate the convex combination of $h_v$ and the mean neighbor embedding $m_{N_s(v)}$ (1b). Note that for $\lambda = 1$ the embedding of a node will not change once it is set, while for $\lambda < 1$ all embeddings keep being updated depending on their respective neighborhoods. If $v$ does not have an embedding but at least one neighbor has an embedding, we set $v$'s embedding to the mean neighbor embedding (1c). This is done for multiple iterations. After each iteration, $s$ is updated by setting the entries of nodes that got an embedding to 1. We run enough iterations such that each node with a path to at least one training node gets an embedding, plus delay-many additional iterations to update the embeddings of nodes like $v_{5}$ in the example in Figure 1 b).

# 3.4. Boosting Inductive Performance

We propose two different approaches which modify the training to improve the generation of inductive embeddings.

Sampling-based To promote embeddings that are better suited to the inductive extension of embeddings, we simulate a simple version of the post-hoc extension during training. In each epoch, some features are replaced by their mean neighborhood embedding with probability $r$:

$$
h_v = \begin{cases}
m_{N(v)} & \text{with probability } r \\
h_v & \text{else}
\end{cases}
$$

Loss-based In addition to the sampling-based approach, we also introduce a loss-based approach to prepare the embeddings for our inductive extension. When extending the trained embeddings to the inductive nodes during inference, we set the embedding of new nodes to their mean neighborhood embedding. The first loss promotes this relationship in the trained embeddings by pulling a node's own embedding closer to its mean neighborhood embedding:

$$
\mathcal{L}_{\text{close}}(v) = -\log\left(\sigma\left(h_v \cdot m_{N(v)}\right)\right).
$$

A trivial solution to minimize this loss would be to assign identical embeddings to all neighboring nodes, so we add a second loss which promotes diversity among the embeddings of the individual neighbors of each node:

$$
\mathcal{L}_{\text{div}}(v) = \frac{1}{|N(v)|^2} \sum_{u, w \in N(v)} \mathrm{sim}(h_u, h_w),
$$

where sim is the cosine similarity. The final loss for iN2V is

$$
\mathcal{L}(v) = \mathcal{L}_{n2v}(v) + \alpha \cdot \mathcal{L}_{\text{close}}(v) + \beta \cdot \mathcal{L}_{\text{div}}(v)
$$

with hyperparameters $\alpha$ and $\beta$. The N2V loss $\mathcal{L}_{n2v}$ is calculated using random walks. In each epoch, every node appears on average (walks per node) $\times$ (walk length) times in these random walks. The random walks are batched for training, but when the same node appears multiple times in a batch, the additional calculations for $\mathcal{L}_{\text{close}}$ and $\mathcal{L}_{\text{div}}$ are redundant. To reduce these redundant calculations, we

Table 1.
Graph features, nodes, edges, classes, and adjusted homophily. Upper: homophilic, bottom: heterophilic graphs. + +
| Dataset | $\vert X \vert$ | $\vert V \vert$ | $\vert E \vert$ | $\vert C \vert$ | $hom_{adj}$ |
| --- | --- | --- | --- | --- | --- |
| Cora | 1433 | 2708 | 10556 | 7 | 0.77 |
| CiteSeer | 3703 | 3327 | 9104 | 6 | 0.67 |
| PubMed | 500 | 19717 | 88648 | 3 | 0.69 |
| Computers | 767 | 13752 | 491722 | 10 | 0.68 |
| Photo | 745 | 7650 | 238162 | 8 | 0.79 |
| WikiCS | 300 | 11701 | 431726 | 10 | 0.58 |
| Actor | 932 | 7600 | 30019 | 5 | 0.01 |
| Amazon-R. | 300 | 24492 | 93050 | 5 | 0.14 |
| Roman-E. | 300 | 22662 | 32927 | 18 | -0.05 |
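Since the post-hoc extension of Section 3.3 is only a few lines of logic, the following is a minimal sketch of Equation (1) in plain PyTorch; the function and variable names are ours, not from the official iN2V repository.

```python
import torch

def extend_embeddings(h, has_emb, neighbors, n_iters, lam=0.5):
    """Post-hoc extension sketch following Eq. (1).
    h:         (|V|, d) embeddings, zeros for nodes outside the training set
    has_emb:   (|V|,) bool lookup vector s (True for training nodes)
    neighbors: list of neighbor index lists, neighbors[v] = N(v)
    n_iters:   number of iterations (enough to reach all nodes, plus a delay)
    lam:       weight of a node's own embedding in the convex combination (1b)
    """
    h, s = h.clone(), has_emb.clone()
    for _ in range(n_iters):
        new_h, new_s = h.clone(), s.clone()
        for v, nbrs in enumerate(neighbors):
            nbrs_with_emb = [u for u in nbrs if s[u]]
            if not nbrs_with_emb:          # case (1a): keep the current embedding
                continue
            mean_nbr = h[nbrs_with_emb].mean(dim=0)
            if s[v]:                       # case (1b): convex combination
                new_h[v] = lam * h[v] + (1.0 - lam) * mean_nbr
            else:                          # case (1c): adopt the mean neighbor embedding
                new_h[v] = mean_nbr
                new_s[v] = True
        h, s = new_h, new_s                # s is only updated after a full iteration
    return h

# Toy graph like Figure 1 a): nodes 0 and 2 are training nodes, node 1 is unseen.
h0 = torch.tensor([[1.0, 0.0], [0.0, 0.0], [0.0, 1.0]])
seen = torch.tensor([True, False, True])
nbrs = [[1], [0, 2], [1]]
print(extend_embeddings(h0, seen, nbrs, n_iters=2, lam=1.0))
```

With `lam=1.0` this reproduces the "frozen" setup, and the toy graph mimics Figure 1 a), where the unseen middle node receives the average embedding of its two training neighbors.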
+ +sample the nodes to calculate $\mathcal{L}_{close}$ and $\mathcal{L}_{div}$ independent from the random walks such that each node is the center node for these losses once per epoch. + +# 4. Experimental Apparatus + +# 4.1. Datasets + +We use the Cora (Sen et al., 2008), CiteSeer (Sen et al., 2008), and PubMed (Namata et al., 2012) citation graphs, the Computers (Shchur et al., 2018) and Photo (Shchur et al., 2018) co-purchase graphs, and the WikiCS Wikipedia page graph (Mernyei & Cangea, 2020). These graphs are homophilic, i.e., neighboring nodes usually share the same class. The following graphs are more heterophilic, i.e., neighboring nodes usually do not share the same class. Actor (Pei et al., 2020) is a Wikipedia co-occurrence graph, Amazon-ratings (Platonov et al., 2023b) is a co-purchase graph, and Roman-empire (Platonov et al., 2023b) is a text-based graph. Table 1 shows more details about the datasets. We report adjusted homophily as it accounts for class imbalance (Platonov et al., 2023a). + +# 4.2. Procedure + +We investigate how well iN2V works for differently sized data splits into training and unseen validation or test nodes. For all datasets, we use 5 splits of different sizes that always utilize the full dataset and have a validation and test set of the same size. The training set sizes are $10\%$ , $20\%$ , $40\%$ , $60\%$ , and $80\%$ , with respective validation and test set sizes of $45\%$ , $40\%$ , $30\%$ , $20\%$ , and $10\%$ of all nodes. + +First, we prepare 10 splits from different random seeds for each of the five split sizes. Then we train N2V inductively on the training subgraphs. For the N2V hyperparameter search, we use a grid search on three of the ten splits. The embeddings from the training set are extended to the validation set using iN2V. The final N2V hyperparameters are chosen based on the validation accuracy of logistic regression on these embeddings. These hyperparameters are used to train + +and store embeddings for all 10 splits. Then we evaluate the embeddings using MLP and GraphSAGE (Hamilton et al., 2017). Additionally, we investigate the concatenation of the extended N2V embeddings with the original graph features. + +Regarding the extended embeddings, we compare different setups of iN2V. The first is the "frozen" setup $(\lambda = 1)$ , where embeddings do not change after being set. The second one is the best post-hoc setup, where $\lambda$ and delay are searched as hyperparameters. The third and fourth ones combine the sampling-based and loss-based modifications to N2V with the best post-hoc setup. We compare these results with different baselines. The comparable baselines that use the same information as iN2V are plain N2V used inductively, i.e., only training nodes have embeddings, and using Feature Propagation (Rossi et al., 2022) to extend N2V embeddings to the test nodes. Using the original graph features and training N2V embeddings in a transductive setup are two more baselines. They are not directly comparable with iN2V as they use more information but are nevertheless useful for perspective. Additional experiments with other GNNs can be found in Appendix D. + +# 4.3. Hyperparameter Optimization + +We make all datasets undirected. For N2V, we use a context size of 10 for positive samples and 1 negative per positive sample. We use a batch size of 128 and early stopping with patience of 50 epochs. For every epoch, we sample 10 walks of length 20 per node. 
We do grid search over all combinations of $p$ and $q \in \{0.2, 1, 5\}$, embedding size $d \in \{64, 256\}$, and learning rate $\in \{0.1, 0.01, 0.001\}$. For the sampling-based method, we try $r \in \{0.2, 0.4, 0.6, 0.8\}$. The loss weights $\alpha \in \{0, 0.1, 1, 10\}$ and $\beta \in \{0, 0.001, 0.01, 0.1\}$ are tuned separately from $r$. For Feature Propagation, we search the number of iterations in $\{10, 20, 40, 60\}$.

For MLP and GraphSAGE, we use grid search for the full 10 seeds per split. We search over all combinations of the number of layers $\in \{1, \ldots, 5\}$, hidden size $\in \{64, 512\}$, learning rate $\in \{0.01, 0.001\}$, weight decay $\in \{0, 0.0001, 0.01\}$, dropout $\in \{0.2, 0.5, 0.8\}$, and whether to use jumping knowledge (Xu et al., 2018) connections.

# 5. Results and Discussion

# 5.1. Key Results

Comparison of iN2V vs. Baselines Table 2 shows the performance of GraphSAGE using iN2V embeddings and compares them to normal N2V in the inductive setting (test nodes have no features) and N2V embeddings extended by Feature Propagation. We also report performance on the original graph features and transductive N2V embeddings to indicate how close the baselines and iN2V are to the
+ +Influence of GNN Choice and N2V Modifications Table 3 shows the performance aggregated over all datasets and train splits. The first two columns show the performance impact of the loss-based and sampling-based modifications to N2V when using only those embeddings as input for the GNNs. Our post-hoc extension, which can adapt input embeddings, outperforms Feature Propagation and the frozen post-hoc variant. This shows that adapting existing features is an important capability of iN2V. The loss-based and sampling-based modifications to N2V training provide + +Table 2. Accuracy of best iN2V variant vs baseline embeddings. The underlying model is GraphSAGE. Gray numbers are not directly comparable as they use additional information, i.e., original features, or the transductive setup. + +
| Dataset / Method | 10% train | 20% train | 40% train | 80% train |
| --- | --- | --- | --- | --- |
| **Cora** |  |  |  |  |
| N2V (inductive) | 42.18±3.52 | 59.91±4.15 | 75.07±1.92 | 84.50±1.44 |
| Feature Propagation | 77.91±2.62 | 79.48±2.19 | 81.03±1.85 | 84.13±2.35 |
| iN2V (own) | 78.88±1.45 | 80.94±1.58 | 83.30±1.09 | 84.46±2.08 |
| Original features | 75.27±2.63 | 83.37±1.17 | 86.23±1.77 | 87.05±1.20 |
| N2V (transductive) | 79.25±1.45 | 81.66±1.29 | 83.81±0.95 | 86.01±1.99 |
| **CiteSeer** |  |  |  |  |
| N2V (inductive) | 34.17±3.43 | 42.76±2.15 | 56.12±3.24 | 68.89±2.61 |
| Feature Propagation | 56.85±1.84 | 60.53±2.03 | 63.03±2.21 | 69.76±1.82 |
| iN2V (own) | 57.88±0.91 | 60.78±1.78 | 63.02±1.88 | 68.92±2.07 |
| Original features | 69.85±1.42 | 72.86±0.98 | 74.93±1.56 | 76.82±1.77 |
| N2V (transductive) | 57.14±1.51 | 61.33±1.09 | 66.45±1.60 | 72.76±2.54 |
| **Pubmed** |  |  |  |  |
| N2V (inductive) | 66.02±5.57 | 74.73±2.36 | 79.84±1.78 | 82.61±0.52 |
| Feature Propagation | 76.37±0.62 | 77.72±0.50 | 80.74±0.72 | 82.43±0.86 |
| iN2V (own) | 79.93±0.50 | 80.80±0.46 | 82.14±0.43 | 82.59±0.63 |
| Original features | 85.95±0.47 | 86.99±0.28 | 88.32±0.48 | 89.85±0.56 |
| N2V (transductive) | 81.36±0.49 | 82.20±0.50 | 83.22±0.34 | 83.66±0.65 |
| **Computers** |  |  |  |  |
| N2V (inductive) | 77.64±2.81 | 84.44±0.84 | 87.18±0.77 | 89.35±0.70 |
| Feature Propagation | 82.79±0.64 | 86.43±0.63 | 89.40±0.45 | 90.87±0.64 |
| iN2V (own) | 88.36±0.58 | 89.67±0.40 | 90.84±0.37 | 91.38±0.51 |
| Original features | 87.52±0.48 | 89.76±0.40 | 91.12±0.20 | 91.50±0.48 |
| N2V (transductive) | 89.18±0.38 | 90.16±0.44 | 90.77±0.39 | 91.16±0.56 |
| **Photo** |  |  |  |  |
| N2V (inductive) | 85.48±1.28 | 87.73±1.40 | 90.98±0.71 | 92.21±0.96 |
| Feature Propagation | 87.43±1.09 | 90.14±0.34 | 91.57±0.42 | 92.95±0.79 |
| iN2V (own) | 90.51±0.72 | 91.70±0.44 | 92.37±0.46 | 93.08±0.77 |
| Original features | 93.74±0.42 | 94.59±0.37 | 95.27±0.38 | 95.59±0.77 |
| N2V (transductive) | 91.29±0.41 | 92.29±0.30 | 92.90±0.45 | 93.33±0.71 |
| **WikiCS** |  |  |  |  |
| N2V (inductive) | 67.78±2.68 | 74.22±1.65 | 78.21±0.69 | 81.62±0.86 |
| Feature Propagation | 74.77±2.27 | 78.04±1.00 | 80.17±0.68 | 81.93±1.08 |
| iN2V (own) | 78.91±0.61 | 80.19±0.70 | 81.28±0.61 | 82.37±1.01 |
| Original features | 80.75±0.64 | 82.56±0.81 | 84.28±0.55 | 85.88±0.70 |
| N2V (transductive) | 79.75±0.41 | 80.93±0.64 | 81.88±0.55 | 82.81±0.68 |
| **Actor** |  |  |  |  |
| N2V (inductive) | 25.14±1.09 | 25.56±0.90 | 25.54±1.15 | 25.68±1.42 |
| Feature Propagation | 25.22±1.29 | 25.39±1.06 | 25.14±0.59 | 25.05±1.34 |
| iN2V (own) | 25.18±0.97 | 25.50±0.70 | 25.40±0.99 | 25.79±2.27 |
| Original features | 31.77±0.71 | 33.84±0.91 | 36.48±0.54 | 36.71±1.23 |
| N2V (transductive) | 25.50±0.76 | 25.56±0.95 | 25.41±0.74 | 24.55±1.70 |
| **Amazon-ratings** |  |  |  |  |
| N2V (inductive) | 37.47±0.47 | 40.69±0.63 | 44.71±0.85 | 49.47±1.07 |
| Feature Propagation | 38.97±0.76 | 41.68±0.76 | 45.72±0.75 | 50.04±1.42 |
| iN2V (own) | 40.02±0.79 | 42.01±0.48 | 45.48±0.57 | 50.38±2.22 |
| Original features | 39.20±1.06 | 41.66±0.70 | 48.07±0.58 | 57.34±0.97 |
| N2V (transductive) | 41.82±0.66 | 43.69±0.52 | 46.31±0.78 | 49.80±0.94 |
| **Roman-empire** |  |  |  |  |
| N2V (inductive) | 13.96±0.35 | 14.10±0.32 | 15.77±0.76 | 16.81±2.94 |
| Feature Propagation | 13.23±1.44 | 13.89±0.44 | 15.49±0.67 | 21.97±1.25 |
| iN2V (own) | 13.79±0.39 | 13.86±0.29 | 14.48±0.63 | 18.55±1.23 |
| Original features | 66.09±0.75 | 70.28±0.63 | 74.41±0.47 | 82.90±1.09 |
| N2V (transductive) | 13.82±0.22 | 13.86±0.37 | 15.35±1.53 | 27.46±1.36 |
+ +Table 3. Effect of model and input, test accuracy averaged over all datasets and splits. + +
| Input \ Model | embed (MLP) | embed (SAGE) | embed ‖ feat. (MLP) | embed ‖ feat. (SAGE) |
| --- | --- | --- | --- | --- |
| N2V (inductive) | 27.91 | 59.45 | 68.72 | 73.51 |
| Feature Propagation | 62.97 | 63.20 | 73.17 | 74.09 |
| frozen (λ = 1) | 61.74 | 62.88 | 72.40 | 73.46 |
| post-hoc | 64.38 | 63.75 | 73.99 | 74.24 |
| p-h w losses | 64.57 | 63.95 | 74.04 | 74.32 |
| p-h w sampling | 64.42 | 63.83 | 73.95 | 74.02 |
| Original features | 71.01 | 74.73 | 71.01 | 74.73 |
| N2V (transductive) | 63.89 | 65.07 | 74.15 | 75.57 |
+ +a small boost to performance. When we compare the results for MLP vs GraphSAGE, we see that the differences are bigger when using an MLP. Interestingly, MLP outperforms GraphSAGE when using post-hoc embeddings. This is the case because GraphSAGE's main advantage over MLP is its ability to aggregate neighborhood information internally. N2V already encodes this information in the embeddings, and additionally, our post-hoc extension is similar to the aggregation performed by models like GraphSAGE. This effect might also be reinforced by the fact that we use logistic regression on the embeddings for the N2V hyperparameter selection, which is closer to an MLP than it is to GraphSAGE. Detailed results per dataset and split are shown in Appendix B. + +Combining N2V embeddings and Original Features For datasets that already have features, N2V embeddings can be used to provide additional structural information to GNNs. The third and fourth columns of Table 3 show our results in an aggregated way, while per dataset results can be found in Appendix C. Using both the N2V embeddings and original features as input for the models increases overall performance by about 10 points compared to just using N2V embeddings. The increase is bigger for the N2V (inductive) baseline, as the test nodes do not have N2V embeddings in that scenario. When just using N2V embeddings, MLP has a slightly higher average performance than GraphSAGE in most setups; this switches to GraphSAGE having a slight lead over MLP when using both as input. + +Compared to the original features baseline, GraphSAGE actually loses an average of .5 performance points when also using the N2V as input, whereas MLP gains 3 points. This again shows that MLP can benefit from more structural information while GraphSAGE is already capable of aggregating this information by itself. The increase heavily depends on the dataset; see detailed per-dataset results in Appendix C. Both models gain over 50 points on Roman-empire and have only smaller gains on the other datasets. + +![](images/80886f9d79f0ac1b614c63317e4e81514b8141defb594065c29564552197659c.jpg) +Figure 2. Influence of amount of training data. Results are averaged over all datasets. + +These dataset-specific differences are explained by the usefulness of N2V embeddings vs the usefulness of the original features. On Roman-empire, the graph is close to a sequence of words with few additional edges; the neighborhood information encoded by N2V embeddings does not bring much useful information. + +Influence of Amount of Training Data Figure 2 visualizes the effect of the training set size averaged over all datasets. In general, performance increases with more training data. The only exception is N2V in the inductive setting when using MLP, as the test nodes do not have embeddings, and performance stays at a random guessing level. GraphSAGE can compensate for the missing test embeddings. While starting with low performance for little training data, it is close to the methods that actually extend embeddings to test nodes for $80\%$ training data. The MLP model with Feature Propagation or frozen post-hoc $(\lambda = 1)$ consistently performs below the other post-hoc variants. GraphSAGE with Feature Propagation and frozen post-hoc catch up and outperform some other variants when using $80\%$ training data. Another interesting observation is that for MLP, the transductive N2V starts in line with the post-hoc variants but rises less when the training data increases. 
For GraphSAGE, transductive N2V already starts higher than the post-hoc variants and even widens its lead when the amount of training data increases.

Applying the Loss and Sampling Modifications with Feature Propagation While our modifications to the N2V training procedure are motivated by our post-hoc extension, these two parts act independently. The loss-based and sampling-based modifications change the generated N2V embeddings, and the post-hoc algorithm extends these independently of their exact values. This means that instead of using our post-hoc algorithm, we can also use Feature Propagation to extend the modified N2V embeddings.

Table 4. Applying our loss- and sampling-based N2V modifications when using Feature Propagation to extend embeddings.
| Model | MLP | GraphSAGE |
| --- | --- | --- |
| FP | 62.97 | 63.20 |
| FP w loss | 63.04 | 63.22 |
| FP w sampling | 62.98 | 62.98 |
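For completeness, here is a small sketch of the two auxiliary terms from Section 3.4 that the "FP w loss" configuration adds to the N2V objective. Tensor shapes and names are ours, and the full training loop is omitted.

```python
import torch
import torch.nn.functional as F

def aux_losses(h_v, h_neighbors):
    """h_v: (d,) center-node embedding; h_neighbors: (k, d) embeddings of N(v).
    L_close pulls h_v towards its mean neighbor embedding,
    L_div penalizes pairwise cosine similarity among the neighbors."""
    m = h_neighbors.mean(dim=0)
    l_close = -torch.log(torch.sigmoid(h_v @ m))
    sims = F.cosine_similarity(h_neighbors.unsqueeze(1), h_neighbors.unsqueeze(0), dim=-1)
    l_div = sims.mean()   # equals (1 / |N(v)|^2) * sum over ordered neighbor pairs
    return l_close, l_div

h_v = torch.randn(16, requires_grad=True)
nbrs = torch.randn(4, 16)
l_close, l_div = aux_losses(h_v, nbrs)
alpha, beta = 1.0, 0.01
extra = alpha * l_close + beta * l_div   # added on top of the standard N2V loss L_n2v(v)
```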
+ 

Table 4 shows the results of this experiment averaged over all datasets. Feature Propagation gains around 0.05 points from the loss modification of the embeddings for both models. The sampling-based modification does not change the MLP performance, but it reduces the GraphSAGE performance by 0.2 points. This is lower than the average of 0.2 points gained by post-hoc with the losses and 0.06 points gained by the sampling-based modification in Table 3, and shows the synergy of our post-hoc method with the N2V modifications.

# 5.2. Ablation Study on the Post-hoc Method

We perform ablation studies on our post-hoc method and the loss modification. For this, we use the $40\%$ training split, set the N2V hyperparameters $p$ and $q$ to 1, the embedding size to 256, and the learning rate to 0.01. For a sensitivity analysis of these hyperparameters, see Appendix A.

![](images/6540d7b1bb99417062756652aee601e026f55c70fb5128c80472f15f0cbfdad8.jpg)
Figure 3. Ablation of the effect of $\lambda$ for different delays vs Feature Propagation and MatMul.

Figure 3 shows the effect of $\lambda$ and delay in our post-hoc extension, Feature Propagation, as well as a MatMul baseline that multiplies the embedding matrix iter-many times with the adjacency matrix. When increasing iter or delay, the post-hoc methods and MatMul increase to a maximum, which depends on $\lambda$ , and then drop off again. This nicely shows the trade-off we discuss in Section 3. More iterations allow an adaption to new paths from the test split, but too many iterations lead to a convergence of all embeddings to a graph average. For $\lambda = 1$ , this is not the case, as embeddings do not change once they are set, which means that the embedding and performance are fixed for delay $\geq 0$ . As Feature Propagation keeps the training embeddings fixed, its performance increases with more iterations. Overall, we can observe that adapting training embeddings is important, as post-hoc with $\lambda < 1$ and MatMul outperform both Feature Propagation and post-hoc with $\lambda = 1$ .

Figure 4 shows the effect of the loss weights $\alpha$ and $\beta$ on logistic regression validation accuracy. As we already saw in our main results, the loss-based N2V modification gives a small performance improvement. The figure suggests that $\alpha$ has a bigger influence on the performance as long as $\beta$ is not too high.

![](images/e6e5316ff5c98ca9fc693979b67671624fbd8c41230f58e9fe07cc7a6f859c78.jpg)
(a) Cora

![](images/ed1e76e253a2397f9f6dbde64e4ce364f68a4f89bbf5b2d4d69e29c6dd0f0d16.jpg)
(b) Photo

Figure 4. Validation accuracy for different weights for $\alpha$ and $\beta$ .

# 6. Limitations and Future Work

Our iN2V algorithm uses N2V and shares some of its limitations. N2V embeddings do not add any information for datasets where neighborhood structure alone is irrelevant. We have observed this case with the Actor and Roman-Empire datasets, where the performance when using only N2V embeddings as model input was close to predicting the largest class. Many molecule datasets consist of thousands of individual subgraphs, where the train and test split is done on a per-graph level. In cases where there is no path between the training and test nodes, our post-hoc extensions (like Feature Propagation) cannot provide embeddings for test nodes in an inductive setting. Our method generally can only provide embeddings to test nodes with a path to at least one train node. If this affects only a few nodes, then iN2V still shows good performance.
We have shown this with Cora and Citeseer, which have 78 and 438 components, respectively. Some of these components have no nodes in the training set when using the small training splits. + +As our post-hoc extension is flexible and not tied to N2V, other shallow embedding methods can be used for datasets where the neighborhood structure does not provide helpful information. For example, one could use struc2vec, which focuses on the similarity of neighborhood degree distributions, to obtain better embeddings for heterophilic datasets. In this case, it might be better to utilize the neighborhood-similarity-based graph struc2vec builds for the post-hoc algorithm. Another limitation and avenue for future work is that we used random sampling to create the dataset splits. Using random splits is important for a fair evaluation of different embeddings and models (Shchur et al., 2018), but in the inductive case, this leads to some unconnected training nodes in the splits with few training nodes. Especially for those splits, it might be beneficial to use a biased sampler that prefers nodes with edges to already sampled nodes to obtain better-connected training sets. + +Our iN2V is not limited to simple graphs; it can already deal with multi-edges and self-loops. Self-loops increase the influence of $h_v$ in Equation 1b to more than $\lambda$ as it also appears in $m_{N_s(v)}$ . Edge weights could be incorporated into the mean neighbor embedding $m_{N_s(v)}$ by replacing the mean with a weighted sum using normalized edge weights. For KGs, iN2V could be used as-is or by replacing N2V with a KG-focused embedding like TransE that embeds vertices and relations. When doing so, Equations 1b and 1c have to be adapted to incorporate the relation embeddings. + +# 7. Conclusion + +We introduced iN2V, a general post-hoc extension to induce embeddings to unseen nodes in the inductive setup. We modified the training algorithm of N2V to obtain embeddings better suited to this induction. Our extensive experiments on different datasets, training splits, and using different GNNs on the embeddings showed that iN2V works well and beats the comparable baselines. For some datasets and splits, iN2V even outperforms using the original graph features. In our detailed discussion and ablation, we have shown that our post-hoc extensions perform remarkably well. At the same time, the modifications of the N2V training have a smaller influence on the final performance. Our experiments also showed general limitations of N2V-based approaches for some of the heterophilic datasets, where the performance remained close to random. + +# Acknowledgements + +This work was performed on the computational resource bwUniCluster funded by the Ministry of Science, Research and the Arts Baden-Württemberg and the Universities of the State of Baden-Württemberg, Germany, within the framework program bwHPC + +# Impact Statement + +This paper aims to transfer shallow embedding methods like N2V to the inductive setup, where only the graph induced by training nodes is known during training. Our work can be used to apply GNNs to new nodes in the inductive setup, for example new users in social networks or new products in co-purchase graphs, and outperforms existing methods. While our work has implications, particularly in improving the handling of nodes with missing features or in dealing with feature-less graphs, we believe that no specific societal consequences require immediate emphasis in this context. 
# References

Abu-El-Haija, S., Perozzi, B., Kapoor, A., Alipourfard, N., Lerman, K., Harutyunyan, H., Steeg, G. V., and Galstyan, A. Mixhop: Higher-order graph convolutional architectures via sparsified neighborhood mixing. In Chaudhuri, K. and Salakhutdinov, R. (eds.), Proceedings of the 36th International Conference on Machine Learning, ICML 2019, 9-15 June 2019, Long Beach, California, USA, volume 97 of Proceedings of Machine Learning Research, pp. 21-29. PMLR, 2019. URL http://proceedings.mlr.press/v97/abu-el-haija19a.html.

Adhikari, B., Zhang, Y., Ramakrishnan, N., and Prakash, B. A. Sub2vec: Feature learning for subgraphs. In Phung, D. Q., Tseng, V. S., Webb, G. I., Ho, B., Ganji, M., and Rashidi, L. (eds.), Advances in Knowledge Discovery and Data Mining - 22nd Pacific-Asia Conference, PAKDD 2018, Melbourne, VIC, Australia, June 3-6, 2018, Proceedings, Part II, volume 10938 of Lecture Notes in Computer Science, pp. 170-182. Springer, 2018. doi: 10.1007/978-3-319-93037-4_14. URL https://doi.org/10.1007/978-3-319-93037-4_14.

Bi, W., Du, L., Fu, Q., Wang, Y., Han, S., and Zhang, D. Make heterophilic graphs better fit GNN: A graph rewiring approach. IEEE Trans. Knowl. Data Eng., 36(12):8744-8757, 2024. doi: 10.1109/TKDE.2024.3441766. URL https://doi.org/10.1109/TKDE.2024.3441766.

Bordes, A., Usunier, N., García-Durán, A., Weston, J., and Yakhnenko, O. Translating embeddings for modeling multi-relational data. In Burges, C. J. C., Bottou, L., Ghahramani, Z., and Weinberger, K. Q. (eds.), Advances in Neural Information Processing Systems 26: 27th Annual Conference on Neural Information Processing Systems 2013. Proceedings of a meeting held December 5-8, 2013, Lake Tahoe, Nevada, United States, pp. 2787-2795, 2013. URL https://proceedings.neurips.cc/paper/2013/hash/1cecc7a77928ca8133fa24680a88d2f9-Abstract.html.

Chen, M., Wei, Z., Huang, Z., Ding, B., and Li, Y. Simple and deep graph convolutional networks. In Proceedings of the 37th International Conference on Machine Learning, ICML 2020, 13-18 July 2020, Virtual Event, volume 119 of Proceedings of Machine Learning Research, pp. 1725-1735. PMLR, 2020. URL http://proceedings.mlr.press/v119/chen20v.html.

Chen, M., Zhang, W., Yuan, Z., Jia, Y., and Chen, H. Fede: Embedding knowledge graphs in federated setting. In IJCKG'21: The 10th International Joint Conference on Knowledge Graphs, Virtual Event, Thailand, December 6 - 8, 2021, pp. 80-88. ACM, 2021. doi: 10.1145/3502223.3502233. URL https://doi.org/10.1145/3502223.3502233.

Chen, Y., Mishra, P., Franceschi, L., Minervini, P., Stenetorp, P., and Riedel, S. Refactor gnns: Revisiting factorisation-based models from a message-passing perspective. In Koyejo, S., Mohamed, S., Agarwal, A., Belgrave, D., Cho, K., and Oh, A. (eds.), Advances in Neural Information Processing Systems 35: Annual Conference on Neural Information Processing Systems 2022, NeurIPS 2022, New Orleans, LA, USA, November 28 - December 9, 2022, 2022. URL http://papers.nips.cc/paper_files/paper/2022/hash/66f7a3df255c47b2e72f30b310a7e44a-Abstract-Conference.html.

Chien, E., Peng, J., Li, P., and Milenkovic, O. Adaptive universal generalized pagerank graph neural network. In 9th International Conference on Learning Representations, ICLR 2021, Virtual Event, Austria, May 3-7, 2021. OpenReview.net, 2021. URL https://openreview.net/forum?id=n6jl7fLxrP.

Donnat, C., Zitnik, M., Hallac, D., and Leskovec, J. Learning structural node embeddings via diffusion wavelets. In Guo, Y. and Farooq, F. (eds.), Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, KDD 2018, London, UK, August 19-23, 2018, pp. 1320-1329. ACM, 2018. doi: 10.1145/3219819.3220025. URL https://doi.org/10.1145/3219819.3220025.

Grover, A. and Leskovec, J. node2vec: Scalable feature learning for networks. In Krishnapuram, B., Shah, M., Smola, A. J., Aggarwal, C. C., Shen, D., and Rastogi, R. (eds.), Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, San Francisco, CA, USA, August 13-17, 2016, pp. 855-864. ACM, 2016. doi: 10.1145/2939672.2939754. URL https://doi.org/10.1145/2939672.2939754.

Hahn, S. H. and Paulheim, H. RDF2vec embeddings for updateable knowledge graphs - reuse, don't retrain! ESWC Posters and Demos, 2024.

Hamilton, W. L., Ying, Z., and Leskovec, J. Inductive representation learning on large graphs. In Guyon, I., von Luxburg, U., Bengio, S., Wallach, H. M., Fergus, R., Vishwanathan, S. V. N., and Garnett, R. (eds.), Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, December 4-9, 2017, Long Beach, CA, USA, pp. 1024-1034, 2017. URL https://proceedings.neurips.cc/paper/2017/hash/5dd9db5e033da9c6fb5ba83c7a7ebea9-Abstract.html.

Hu, Y., You, H., Wang, Z., Wang, Z., Zhou, E., and Gao, Y. Graph-mlp: Node classification without message passing in graph. CoRR, abs/2106.04051, 2021. URL https://arxiv.org/abs/2106.04051.

Jia, J. and Benson, A. R. A unifying generative model for graph learning algorithms: Label propagation, graph convolutions, and combinations. SIAM J. Math. Data Sci., 4(1):100-125, 2022. doi: 10.1137/21M1395351. URL https://doi.org/10.1137/21m1395351.

Kipf, T. N. and Welling, M. Semi-supervised classification with graph convolutional networks. In 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings. OpenReview.net, 2017. URL https://openreview.net/forum?id=SJU4ayYgl.

Kohn, M., Hoffmann, M., and Scherp, A. Edge-splitting MLP: Node classification on homophilic and heterophilic graphs without message passing. In The Third Learning on Graphs Conference, 2024. URL https://openreview.net/forum?id=BQEb4r21cm.

Lacroix, T., Usunier, N., and Obozinski, G. Canonical tensor decomposition for knowledge base completion. In Dy, J. G. and Krause, A. (eds.), Proceedings of the 35th International Conference on Machine Learning, ICML 2018, Stockholm, Sweden, July 10-15, 2018, volume 80 of Proceedings of Machine Learning Research, pp. 2869-2878. PMLR, 2018. URL http://proceedings.mlr.press/v80/lacroix18a.html.

Le, Q. V. and Mikolov, T. Distributed representations of sentences and documents. In Proceedings of the 31st International Conference on Machine Learning, ICML 2014, Beijing, China, 21-26 June 2014, volume 32 of JMLR Workshop and Conference Proceedings, pp. 1188-1196. JMLR.org, 2014. URL http://proceedings.mlr.press/v32/le14.html.

Lell, N. and Scherp, A. Hyperaggregation: Aggregating over graph edges with hypernetworks. In International Joint Conference on Neural Networks, IJCNN 2024, Yokohama, Japan, June 30 - July 5, 2024, pp. 1-9. IEEE, 2024. doi: 10.1109/IJCNN60899.2024.10650980. URL https://doi.org/10.1109/IJCNN60899.2024.10650980.

Lim, D., Hohne, F., Li, X., Huang, S. L., Gupta, V., Bhalerao, O., and Lim, S. Large scale learning on non-homophilous graphs: New benchmarks and strong simple methods. In Ranzato, M., Beygelzimer, A., Dauphin, Y. N., Liang, P., and Vaughan, J. W. (eds.), Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information Processing Systems 2021, NeurIPS 2021, December 6-14, 2021, virtual, pp. 20887-20902, 2021. URL https://proceedings.neurips.cc/paper/2021/hash/ae816a80e4c1c56caa2eb4e1819cbb2f-Abstract.html.

Longa, A., Lachi, V., Santin, G., Bianchini, M., Lepri, B., Lio, P., Scarselli, F., and Passerini, A. Graph neural networks for temporal graphs: State of the art, open challenges, and opportunities. Trans. Mach. Learn. Res., 2023, 2023. URL https://openreview.net/forum?id=pHcMat0gI.

Luan, S., Hua, C., Lu, Q., Zhu, J., Zhao, M., Zhang, S., Chang, X., and Precup, D. Revisiting heterophily for graph neural networks. In Koyejo, S., Mohamed, S., Agarwal, A., Belgrave, D., Cho, K., and Oh, A. (eds.), Advances in Neural Information Processing Systems 35: Annual Conference on Neural Information Processing Systems 2022, NeurIPS 2022, New Orleans, LA, USA, November 28 - December 9, 2022, 2022. URL http://papers.nips.cc/paper_files/paper/2022/hash/092359ce5cf60a80e882378944bf1be4-Abstract-Conference.html.

Mernyei, P. and Cangea, C. Wiki-cs: A wikipedia-based benchmark for graph neural networks. arXiv preprint arXiv:2007.02901, 2020.

Mikolov, T., Chen, K., Corrado, G., and Dean, J. Efficient estimation of word representations in vector space. In Bengio, Y. and LeCun, Y. (eds.), 1st International Conference on Learning Representations, ICLR 2013, Scottsdale, Arizona, USA, May 2-4, 2013, Workshop Track Proceedings, 2013. URL http://arxiv.org/abs/1301.3781.

Mironov, M. and Prokhorenkova, L. Revisiting graph homophily measures. In The Third Learning on Graphs Conference, 2024. URL https://openreview.net/forum?id=fiFBjLDOLV.

Namata, G., London, B., Getoor, L., Huang, B., and EDU, U. Query-driven active surveying for collective classification. In 10th International Workshop on Mining and Learning with Graphs, 2012.

Narayanan, A., Chandramohan, M., Chen, L., Liu, Y., and Saminathan, S. subgraph2vec: Learning distributed representations of rooted sub-graphs from large graphs. CoRR, abs/1606.08928, 2016. URL http://arxiv.org/abs/1606.08928.

Narayanan, A., Chandramohan, M., Venkatesan, R., Chen, L., Liu, Y., and Jaiswal, S. graph2vec: Learning distributed representations of graphs. CoRR, abs/1707.05005, 2017. URL http://arxiv.org/abs/1707.05005.

Pei, H., Wei, B., Chang, K. C., Lei, Y., and Yang, B. Geom-gcn: Geometric graph convolutional networks. In ICLR 2020. OpenReview.net, 2020. URL https://openreview.net/forum?id=S1e2agrFvS.

Perozzi, B., Al-Rfou, R., and Skiena, S. Deepwalk: online learning of social representations. In Macskassy, S. A., Perlich, C., Leskovec, J., Wang, W., and Ghani, R. (eds.), The 20th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD '14, New York, NY, USA - August 24 - 27, 2014, pp. 701-710. ACM, 2014. doi: 10.1145/2623330.2623732. URL https://doi.org/10.1145/2623330.2623732.

Platonov, O., Kuznedelev, D., Babenko, A., and Prokhorenkova, L. Characterizing graph datasets for node classification: Homophily-heterophily dichotomy and beyond. In Oh, A., Naumann, T., Globerson, A., Saenko, K., Hardt, M., and Levine, S. (eds.), Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023, 2023a. URL http://papers.nips.cc/paper_files/paper/2023/hash/01b681025fddbda8e935a66cc5bb6e9de-Abstract-Conference.html.

Platonov, O., Kuznedelev, D., Diskin, M., Babenko, A., and Prokhorenkova, L. A critical look at the evaluation of gnns under heterophily: Are we really making progress? In ICLR 2023. OpenReview.net, 2023b. URL https://openreview.net/pdf?id=tJbbQfw-5wv.

Polleres, A., Pernisch, R., Bonifati, A., Dell'Aglio, D., Dobriy, D., Dumbrava, S., Etcheverry, L., Ferranti, N., Hose, K., Jimenez-Ruiz, E., Lissandrini, M., Scherp, A., Tommasini, R., and Wachs, J. How Does Knowledge Evolve in Open Knowledge Graphs? Transactions on Graph Data and Knowledge, 1(1):11:1-11:59, 2023. doi: 10.4230/TGDK.1.1.11. URL https://drops.dagstuhl.de/entities/document/10.4230/TGDK.1.1.11.

Ribeiro, L. F. R., Saverese, P. H. P., and Figueiredo, D. R. struc2vec: Learning node representations from structural identity. In Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, Halifax, NS, Canada, August 13 - 17, 2017, pp. 385-394. ACM, 2017. doi: 10.1145/3097983.3098061. URL https://doi.org/10.1145/3097983.3098061.

Ristoski, P. and Paulheim, H. RDF2vec: RDF graph embeddings for data mining. In Groth, P., Simperl, E., Gray, A. J. G., Sabou, M., Krötzsch, M., Lécué, F., Flock, F., and Gil, Y. (eds.), The Semantic Web - ISWC 2016 - 15th International Semantic Web Conference, Kobe, Japan, October 17-21, 2016, Proceedings, Part I, volume 9981 of Lecture Notes in Computer Science, pp. 498-514, 2016. doi: 10.1007/978-3-319-46523-4_30. URL https://doi.org/10.1007/978-3-319-46523-4_30.

Rossi, E., Kenlay, H., Gorinova, M. I., Chamberlain, B. P., Dong, X., and Bronstein, M. M. On the unreasonable effectiveness of feature propagation in learning on graphs with missing node features. In Rieck, B. and Pascanu, R. (eds.), Learning on Graphs Conference, LoG 2022, 9-12 December 2022, Virtual Event, volume 198 of Proceedings of Machine Learning Research, pp. 11. PMLR, 2022. URL https://proceedings.mlr.press/v198/rossi22a.html.

Sancak, K., Balin, M. F., and Catalyurek, U. Do we really need complicated graph learning models? - a simple but effective baseline. In The Third Learning on Graphs Conference, 2024. URL https://openreview.net/forum?id=0664MgKEVz.

Sen, P., Namata, G., Bilgic, M., Getoor, L., Gallagher, B., and Eliassi-Rad, T. Collective classification in network data. AI Mag., 2008. doi: 10.1609/aimag.v29i3.2157.

Shchur, O., Mumme, M., Bojchevski, A., and Günnemann, S. Pitfalls of graph neural network evaluation. CoRR, 2018. URL http://arxiv.org/abs/1811.05868.

Sun, Z., Deng, Z., Nie, J., and Tang, J. Rotate: Knowledge graph embedding by relational rotation in complex space. In 7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019. OpenReview.net, 2019. URL https://openreview.net/forum?id=HkgEQnRqYQ.

Tang, J., Qu, M., Wang, M., Zhang, M., Yan, J., and Mei, Q. LINE: large-scale information network embedding. In Gangemi, A., Leonardi, S., and Panconesi, A. (eds.), Proceedings of the 24th International Conference on World Wide Web, WWW 2015, Florence, Italy, May 18-22, 2015, pp. 1067-1077. ACM, 2015. doi: 10.1145/2736277.2741093. URL https://doi.org/10.1145/2736277.2741093.

Tian, Y., Zhang, C., Guo, Z., Zhang, X., and Chawla, N. V. Learning mlps on graphs: A unified view of effectiveness, robustness, and efficiency. In The Eleventh International Conference on Learning Representations, ICLR 2023, Kigali, Rwanda, May 1-5, 2023. OpenReview.net, 2023. URL https://openreview.net/forum?id=Cs3r5KLdoj.

Trouillon, T., Welbl, J., Riedel, S., Gaussier, É., and Bouchard, G. Complex embeddings for simple link prediction. In Balcan, M. and Weinberger, K. Q. (eds.), Proceedings of the 33rd International Conference on Machine Learning, ICML 2016, New York City, NY, USA, June 19-24, 2016, volume 48 of JMLR Workshop and Conference Proceedings, pp. 2071-2080. JMLR.org, 2016. URL http://proceedings.mlr.press/v48/trouillon16.html.

Velickovic, P., Cucurull, G., Casanova, A., Romero, A., Lio, P., and Bengio, Y. Graph attention networks. In 6th International Conference on Learning Representations, ICLR 2018, Vancouver, BC, Canada, April 30 - May 3, 2018, Conference Track Proceedings. OpenReview.net, 2018. URL https://openreview.net/forum?id=rJXMpikCZ.

Wang, Z., Zhang, J., Feng, J., and Chen, Z. Knowledge graph embedding by translating on hyperplanes. In Brodley, C. E. and Stone, P. (eds.), Proceedings of the Twenty-Eighth AAAI Conference on Artificial Intelligence, July 27-31, 2014, Quebec City, Quebec, Canada, pp. 1112-1119. AAAI Press, 2014. doi: 10.1609/AAAI.V28I1.8870. URL https://doi.org/10.1609/aaai.v28i1.8870.

Wu, F., Souza Jr., A. H., Zhang, T., Fifty, C., Yu, T., and Weinberger, K. Q. Simplifying graph convolutional networks. In Chaudhuri, K. and Salakhutdinov, R. (eds.), Proceedings of the 36th International Conference on Machine Learning, ICML 2019, 9-15 June 2019, Long Beach, California, USA, volume 97 of Proceedings of Machine Learning Research, pp. 6861-6871. PMLR, 2019. URL http://proceedings.mlr.press/v97/wu19e.html.

Xu, K., Li, C., Tian, Y., Sonobe, T., Kawarabayashi, K., and Jegelka, S. Representation learning on graphs with jumping knowledge networks. In Dy, J. G. and Krause, A. (eds.), Proceedings of the 35th International Conference on Machine Learning, ICML 2018, Stockholm, Sweden, July 10-15, 2018, volume 80 of Proceedings of Machine Learning Research, pp. 5449-5458. PMLR, 2018. URL http://proceedings.mlr.press/v80/xu18c.html.

Xu, K., Hu, W., Leskovec, J., and Jegelka, S. How powerful are graph neural networks? In 7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019. OpenReview.net, 2019. URL https://openreview.net/forum?id=ryGs6iA5Km.

Zhang, S., Liu, Y., Sun, Y., and Shah, N. Graph-less neural networks: Teaching old mlps new tricks via distillation. In The Tenth International Conference on Learning Representations, ICLR 2022, Virtual Event, April 25-29, 2022. OpenReview.net, 2022. URL https://openreview.net/forum?id=4p6_5HBWPCw.

Zhu, J., Yan, Y., Zhao, L., Heimann, M., Akoglu, L., and Koutra, D. Beyond homophily in graph neural networks: Current limitations and effective designs. In Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M., and Lin, H. (eds.), Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual, 2020. URL https://proceedings.neurips.cc/paper/2020/hash/58ae23d878a47004366189884c2f8440-Abstract.html.

# Supplemental Materials

# A. Hyperparameter Sensitivity

This section follows up on Section 5.2. Figure 5 shows the sensitivity of validation accuracy to the iN2V learning rate and embedding dimension while all other hyperparameters (including $\lambda$, $\alpha$, $\beta$, and $r$) are aggregated in the densities.
We can see that the embedding dimension has the biggest influence on the performance, especially for the Cora dataset. For Figure 6, which shows the sensitivity to $p$ and $q$ , we therefore fix the embedding dimension to 256. + +![](images/c36ad80cc0bce9d1d6e250362d88dc0b8cda46bd28ce3cd2219c67599ac61882.jpg) +(a) Cora + +![](images/08b942117cf4ca912ef523d88845ac4406a2e9e579e0d90364455cf8312330ff.jpg) +(b) Photo +Figure 5. Sensitivity to learning rate and embedding dimension. + +![](images/26219cf983de1c63581ba91aae27f0b2c287357308d2c4a523796451d4fce9b1.jpg) +(c) WikiCS + +# B. Complete Result Tables + +Tables 5 and 6 show the comparison of MLP and GraphSAGE on the iN2V embeddings vs baselines. N2V, applied in the inductive setting, and Feature Propagation are comparable because they have access to the same information during training, while original features and N2V in the transductive setting utilize more information (the original features and all nodes during training). Tables 7 and 8 show the MLP and GraphSAGE results when using different iN2V settings. + +# C. Combining Original Graph Features with Trained N2V Embeddings + +As shown by related work, shallow embeddings like DeepWalk or N2V can be combined with the original graph features to improve the performance of GNNs. This is especially helpful for MLPs who do not have access to structure information when only using the default graph features (Tian et al., 2023). Tables 9 and 10 compare the iN2V embeddings vs baselines with the original graph features concatenated to the input embeddings. Tables 11 and 12 compare the different iN2V setups when concatenating the embeddings with the original graph features. + +![](images/1e0a5c6c61289e339f1d9d175cc78d8e02ac727c554e75c1b325c724b0a105da.jpg) +(a) Cora + +![](images/5f0ea29c7575a94ec421b85b4bce69e02971e00a06918c94dffee32cfe0b0331.jpg) +(b) Photo +Figure 6. Sensitivity to the N2V $p$ and $q$ hyperparameters. + +![](images/924a57795d3cc024d21c887f55ff57228bc03a71a02b010dd18898867157593d.jpg) +(c) WikiCS + +# D. Using other GNNs + +The effectiveness of iN2V is not limited by the chosen GNN. To demonstrate this, we additionally performed all experiments with GAT (Velickovic et al., 2018), see Tables 13, 15, 17, and 19 and with GIN (Xu et al., 2019), see Tables 14, 16, 18, and 20. For GAT, we set the number of attention heads to 8. Otherwise, we used the same hyperparameter tuning procedure for both GAT and GIN as for GraphSAGE. + +The results for GIN and GAT are in line with the GraphSAGE results. iN2V outperforms all baselines for most splits on all datasets except Actor, where all N2V-based results amount to guessing the largest class. Averaging the iN2V results over all datasets, splits, and whether to concatenate the iN2V embedding with the original graph features, MLP remains the best model with an average accuracy of 69.37 points. GraphSAGE reaches an average of 69.25 points, GIN of 69.08 points, and GAT of 68.64 points. This shows that an MLP can outperform classical GNNs without distillation or contrastive learning if enough structure information is provided with the features. + +When considering all four GNNs, iN2V outperforms FP by 0.8 points on homophilic and 0.3 points on heterophilic datasets. iN2V outperforms FP by 1.3 points when using MLP as a classification model and by 0.4 points when using message-passing GNNs. 
iN2V outperforms FP by 0.9 points when using only trained embeddings and by 0.3 points when using both trained embeddings and the original graph features as input. Finally, for the $10\%$ and $20\%$ training splits, iN2V leads over FP by 0.9 points in contrast to the 0.4 point lead for the $60\%$ and $80\%$ training splits. + +Table 5. Comparison of best iN2V variant vs baselines; MLP accuracy. Gray numbers are not directly comparable as they use additional information (graph features/transductive setup). + +
Columns give the percentage of training data; entries are accuracy ± standard deviation. Rows in italics use additional information and are not directly comparable.

| Dataset | 10% | 20% | 40% | 60% | 80% |
| --- | --- | --- | --- | --- | --- |
| **Cora** | | | | | |
| N2V (inductive) | 30.04 ± 1.06 | 30.07 ± 1.15 | 30.12 ± 1.05 | 29.67 ± 1.04 | 30.70 ± 2.14 |
| Feature Propagation | 78.54 ± 1.42 | 81.11 ± 1.17 | 82.38 ± 1.01 | 83.76 ± 1.00 | 85.13 ± 2.27 |
| iN2V (own) | 79.84 ± 1.27 | 81.70 ± 1.19 | 83.93 ± 1.16 | 84.69 ± 0.95 | 85.57 ± 2.22 |
| *Original features* | 66.98 ± 1.70 | 71.34 ± 1.79 | 75.94 ± 1.38 | 78.78 ± 1.11 | 79.23 ± 1.95 |
| *N2V (transductive)* | 78.44 ± 1.15 | 80.84 ± 1.49 | 82.82 ± 1.10 | 83.75 ± 1.20 | 83.10 ± 1.86 |
| **CiteSeer** | | | | | |
| N2V (inductive) | 19.81 ± 1.75 | 21.43 ± 1.00 | 21.11 ± 1.27 | 21.10 ± 1.46 | 19.82 ± 3.05 |
| Feature Propagation | 56.76 ± 2.21 | 60.14 ± 1.58 | 65.73 ± 1.37 | 68.54 ± 1.53 | 72.16 ± 2.65 |
| iN2V (own) | 57.88 ± 1.35 | 61.67 ± 1.69 | 66.47 ± 1.91 | 70.62 ± 1.37 | 73.27 ± 1.73 |
| *Original features* | 66.60 ± 1.56 | 70.53 ± 1.04 | 73.23 ± 1.17 | 74.56 ± 1.04 | 75.20 ± 2.10 |
| *N2V (transductive)* | 56.43 ± 2.01 | 60.65 ± 1.81 | 64.77 ± 1.77 | 68.80 ± 1.69 | 70.09 ± 1.32 |
| **Pubmed** | | | | | |
| N2V (inductive) | 39.45 ± 0.69 | 39.69 ± 0.77 | 40.15 ± 0.42 | 40.01 ± 0.71 | 39.30 ± 1.06 |
| Feature Propagation | 78.44 ± 0.44 | 78.99 ± 0.49 | 81.02 ± 0.73 | 81.91 ± 0.64 | 82.56 ± 0.95 |
| iN2V (own) | 80.42 ± 0.44 | 81.77 ± 0.40 | 82.78 ± 0.58 | 83.19 ± 0.44 | 83.26 ± 0.87 |
| *Original features* | 84.11 ± 0.21 | 86.05 ± 0.40 | 87.72 ± 0.40 | 88.63 ± 0.60 | 89.10 ± 0.63 |
| *N2V (transductive)* | 81.03 ± 0.51 | 81.76 ± 0.52 | 82.72 ± 0.39 | 82.96 ± 0.51 | 83.17 ± 0.76 |
| **Computers** | | | | | |
| N2V (inductive) | 37.39 ± 0.32 | 37.33 ± 0.28 | 37.31 ± 0.66 | 37.51 ± 0.75 | 37.72 ± 0.87 |
| Feature Propagation | 81.90 ± 0.93 | 86.16 ± 0.66 | 89.20 ± 0.35 | 90.27 ± 0.60 | 90.71 ± 0.61 |
| iN2V (own) | 88.70 ± 0.47 | 90.04 ± 0.51 | 91.10 ± 0.47 | 91.51 ± 0.48 | 91.55 ± 0.52 |
| *Original features* | 82.15 ± 0.63 | 84.08 ± 0.65 | 85.57 ± 0.53 | 86.00 ± 0.77 | 86.56 ± 0.58 |
| *N2V (transductive)* | 88.58 ± 0.54 | 89.63 ± 0.39 | 90.18 ± 0.41 | 90.51 ± 0.35 | 90.39 ± 0.57 |
| **Photo** | | | | | |
| N2V (inductive) | 25.51 ± 0.46 | 24.86 ± 1.30 | 23.66 ± 1.99 | 22.96 ± 1.95 | 23.15 ± 2.24 |
| Feature Propagation | 87.37 ± 0.85 | 89.38 ± 0.58 | 91.49 ± 0.51 | 92.25 ± 0.53 | 92.64 ± 0.77 |
| iN2V (own) | 90.70 ± 0.50 | 91.74 ± 0.36 | 92.60 ± 0.55 | 92.86 ± 0.72 | 93.07 ± 0.68 |
| *Original features* | 89.32 ± 0.59 | 90.85 ± 0.55 | 91.96 ± 0.55 | 92.41 ± 0.60 | 92.92 ± 0.87 |
| *N2V (transductive)* | 91.23 ± 0.44 | 91.88 ± 0.47 | 92.67 ± 0.50 | 92.73 ± 0.61 | 92.44 ± 0.67 |
| **WikiCS** | | | | | |
| N2V (inductive) | 22.99 ± 0.41 | 22.96 ± 0.49 | 23.05 ± 0.51 | 23.12 ± 0.76 | 22.82 ± 1.26 |
| Feature Propagation | 73.59 ± 1.36 | 75.83 ± 1.53 | 78.83 ± 0.82 | 80.58 ± 0.57 | 80.97 ± 0.87 |
| iN2V (own) | 79.21 ± 0.59 | 80.37 ± 0.80 | 81.47 ± 0.52 | 82.12 ± 0.68 | 82.76 ± 0.83 |
| *Original features* | 76.85 ± 0.64 | 78.76 ± 0.66 | 80.43 ± 0.64 | 81.56 ± 0.78 | 82.38 ± 1.06 |
| *N2V (transductive)* | 79.23 ± 0.68 | 80.51 ± 0.59 | 81.47 ± 0.56 | 81.86 ± 0.53 | 81.92 ± 0.66 |
| **Actor** | | | | | |
| N2V (inductive) | 25.41 ± 1.39 | 25.55 ± 1.16 | 25.69 ± 1.08 | 26.05 ± 0.51 | 26.57 ± 0.79 |
| Feature Propagation | 25.56 ± 1.04 | 25.15 ± 0.94 | 25.83 ± 0.65 | 25.18 ± 0.53 | 25.00 ± 1.05 |
| iN2V (own) | 25.78 ± 0.76 | 25.45 ± 0.87 | 25.00 ± 1.35 | 25.14 ± 0.76 | 25.51 ± 2.00 |
| *Original features* | 35.13 ± 0.51 | 36.20 ± 0.67 | 37.79 ± 0.62 | 38.75 ± 0.86 | 37.87 ± 1.53 |
| *N2V (transductive)* | 25.08 ± 1.04 | 25.34 ± 0.99 | 25.51 ± 1.01 | 25.12 ± 0.89 | 23.92 ± 1.12 |
| **Amazon-ratings** | | | | | |
| N2V (inductive) | 36.91 ± 0.30 | 36.93 ± 0.35 | 36.74 ± 0.37 | 36.78 ± 0.52 | 36.54 ± 0.89 |
| Feature Propagation | 39.70 ± 0.60 | 42.18 ± 0.81 | 44.90 ± 0.62 | 46.24 ± 0.76 | 46.32 ± 0.96 |
| iN2V (own) | 41.41 ± 0.87 | 43.75 ± 0.54 | 46.52 ± 0.77 | 51.66 ± 0.31 | 51.93 ± 0.78 |
| *Original features* | 37.80 ± 0.51 | 41.77 ± 0.61 | 47.20 ± 0.56 | 50.82 ± 0.77 | 54.00 ± 0.69 |
| *N2V (transductive)* | 41.97 ± 0.50 | 42.71 ± 0.48 | 45.57 ± 0.47 | 46.58 ± 0.60 | 47.97 ± 0.89 |
| **Roman-empire** | | | | | |
| N2V (inductive) | 13.34 ± 1.25 | 13.55 ± 0.97 | 14.00 ± 0.28 | 13.99 ± 0.40 | 13.27 ± 1.67 |
| Feature Propagation | 13.83 ± 0.25 | 13.93 ± 0.26 | 13.90 ± 0.29 | 13.88 ± 0.52 | 13.53 ± 0.78 |
| iN2V (own) | 13.89 ± 0.26 | 13.90 ± 0.27 | 13.79 ± 0.24 | 14.23 ± 0.69 | 14.27 ± 0.94 |
| *Original features* | 63.46 ± 0.43 | 65.05 ± 0.31 | 66.47 ± 0.41 | 66.83 ± 0.71 | 66.55 ± 1.02 |
| *N2V (transductive)* | 13.80 ± 0.26 | 13.74 ± 0.39 | 13.95 ± 0.25 | 13.98 ± 0.49 | 13.27 ± 0.90 |
+ +Table 6. Comparison of best iN2V variant vs baselines; GraphSAGE accuracy. Gray numbers are not directly comparable as they use additional information (graph features/transductive setup). + +
Percentage of training data
Dataset10%20%40%60%80%
Cora
N2V (inductive)42.183.5259.914.1575.071.9281.311.5684.501.44
Feature Propagation77.912.6279.482.1981.031.8584.001.0084.132.35
iN2V (own)78.881.4580.941.5883.301.0983.731.1384.462.08
Original features75.272.6383.371.1786.231.7786.990.8687.051.20
N2V (transductive)79.251.4581.661.2983.810.9585.240.8386.011.99
CiteSeer
N2V (inductive)34.173.4342.762.1556.123.2463.612.5768.892.61
Feature Propagation56.851.8460.532.0363.032.2166.512.1869.761.82
iN2V (own)57.880.9160.781.7863.021.8866.811.4268.922.07
Original features69.851.4272.860.9874.931.5676.391.0576.821.77
N2V (transductive)57.141.5161.331.0966.451.6070.141.2672.762.54
Pubmed
N2V (inductive)66.025.5774.732.3679.841.7881.850.6182.610.52
Feature Propagation76.370.6277.720.5080.740.7282.000.5482.430.86
iN2V (own)79.930.5080.800.4682.140.4382.810.4682.590.63
Original features85.990.4786.990.2888.320.4889.250.4489.850.56
N2V (transductive)81.360.4982.200.5083.220.3483.570.5083.660.65
Computers
N2V (inductive)77.642.8184.440.8487.180.7789.210.4289.350.70
Feature Propagation82.790.6486.430.6389.400.4590.430.4090.870.64
iN2V (own)88.360.5889.670.4090.840.3791.080.3691.380.51
Original features87.520.4889.760.4091.120.2091.490.5891.500.48
N2V (transductive)89.180.3890.160.4490.770.3991.190.3991.160.56
Photo
N2V (inductive)85.481.2887.731.4090.980.7191.920.6392.210.96
Feature Propagation87.431.0990.140.3491.570.4292.480.5192.950.79
iN2V (own)90.510.7291.700.4492.370.4692.820.6393.080.77
Original features93.740.4294.590.3795.270.3895.520.5295.590.77
N2V (transductive)91.290.4192.290.3092.900.4593.010.6793.330.71
WikiCS
N2V (inductive)67.782.6874.221.6578.210.6980.400.4581.620.86
Feature Propagation74.772.2778.041.0080.170.6881.030.6881.931.08
iN2V (own)78.910.6180.190.7081.280.6181.730.7682.371.01
Original features80.750.6482.560.8184.280.5585.210.6385.880.70
N2V (transductive)79.750.4180.930.6481.880.5582.300.6582.810.68
Actor
N2V (inductive)25.141.0925.560.9025.541.1525.421.0325.681.42
Feature Propagation25.221.2925.391.0625.140.5925.091.4225.051.34
iN2V (own)25.180.9725.500.7025.400.9925.160.9225.792.27
Original features31.770.7133.840.9136.480.5437.380.7336.711.23
N2V (transductive)25.500.7625.560.9525.410.7424.261.1824.551.70
Amazon-ratings
N2V (inductive)37.470.4740.690.6344.710.8546.980.6849.471.07
Feature Propagation38.970.7641.680.7645.720.7548.030.6750.041.42
iN2V (own)40.020.7942.010.4845.480.5750.390.4850.382.22
Original features39.201.0641.660.7048.070.5853.090.6957.340.97
N2V (transductive)41.820.6643.690.5246.310.7848.400.5549.800.94
Roman-empire
N2V (inductive)13.960.3514.100.3215.770.7615.981.5516.812.94
Feature Propagation13.231.4413.890.4415.490.6716.131.0421.971.25
iN2V (own)13.790.3913.860.2914.480.6315.331.1218.551.23
Original features66.090.7570.280.6374.410.4778.560.5082.901.09
N2V (transductive)13.820.2213.860.3715.351.5321.460.4627.461.36
+ +Table 7. Comparison of N2V vs different iN2V setups, MLP accuracy. + +
Percentage of training data
Dataset10%20%40%60%80%
Cora
N2V (inductive)30.041.0630.071.1530.121.0529.671.0430.702.14
frozen (\( \lambda = 1 \))74.641.7576.711.6079.571.2481.751.6383.511.79
post-hoc79.841.2780.281.8083.931.1684.001.0385.572.46
p-h w losses79.381.4281.071.5883.991.0683.751.5185.242.33
p-h w sampling79.571.3481.701.1982.710.8484.690.9585.572.22
Citeseer
N2V (inductive)19.811.7521.431.0021.111.2721.101.4619.823.05
frozen (\( \lambda = 1 \))54.192.7959.751.6365.061.6968.951.2771.591.90
post-hoc55.912.2661.511.3766.471.9170.621.3772.551.77
p-h w losses58.021.2961.671.6966.561.9070.691.3073.092.32
p-h w sampling57.881.3561.401.2166.621.7870.211.5973.271.73
Pubmed
N2V (inductive)39.450.6939.690.7740.150.4240.010.7139.301.06
frozen (\( \lambda = 1 \))76.610.7478.430.7580.490.6881.450.6982.080.79
post-hoc80.470.3581.770.4082.690.4583.420.5683.300.60
p-h w losses80.420.4481.680.3982.740.4783.450.4983.260.87
p-h w sampling80.420.4681.570.3882.780.5883.190.4483.350.59
Computers
N2V (inductive)37.390.3237.330.2837.310.6637.510.7537.720.87
frozen (\( \lambda = 1 \))82.250.7384.660.5087.120.5388.130.7088.810.97
post-hoc88.060.4689.870.3990.950.4090.900.5691.320.53
p-h w losses88.190.4290.040.5191.100.4791.550.3891.350.60
p-h w sampling88.700.4789.690.5090.750.5791.510.4891.550.52
Photo
N2V (inductive)25.510.4624.861.3023.661.9922.961.9523.152.24
frozen (\( \lambda = 1 \))86.080.9787.720.6890.010.7890.950.5691.560.88
post-hoc90.420.5791.710.3692.470.5093.080.7393.070.68
p-h w losses90.700.5091.740.3692.600.5592.860.7293.120.87
p-h w sampling90.380.5491.900.4492.470.6492.700.7993.111.02
WikiCS
N2V (inductive)22.990.4122.960.4923.050.5123.120.7622.821.26
frozen (\( \lambda = 1 \))70.642.0171.891.3075.480.9877.270.5478.621.65
post-hoc78.920.6980.160.7881.360.6382.030.5782.560.79
p-h w losses78.980.6480.220.8181.470.5282.120.6882.760.83
p-h w sampling79.210.5980.370.8081.480.6682.180.7782.600.83
Actor
N2V (inductive)25.411.3925.551.1625.691.0826.050.5126.570.79
frozen (\( \lambda = 1 \))25.780.7625.450.8725.001.3524.620.9425.030.99
post-hoc25.431.3525.780.7824.840.9724.931.0725.831.50
p-h w losses25.421.1125.760.9224.750.8725.450.4025.512.00
p-h w sampling24.961.2525.820.8325.620.8025.140.7624.431.97
Amazon-ratings
N2V (inductive)36.910.3036.930.3536.740.3736.780.5236.540.89
frozen (\( \lambda = 1 \))39.040.6541.311.0144.441.6245.770.4446.740.81
post-hoc41.330.6643.890.5046.520.7749.440.7849.781.70
p-h w losses41.430.7943.690.5746.680.9451.660.3151.930.78
p-h w sampling41.410.8743.750.5446.581.0249.280.6449.060.67
Roman-empire
N2V (inductive)13.341.2513.550.9714.000.2813.990.4013.271.67
frozen (\( \lambda = 1 \))13.890.2613.900.2713.900.1913.740.6213.640.76
post-hoc13.900.2613.850.3014.010.2414.230.6914.270.94
p-h w losses13.810.3613.680.3313.790.2413.970.4014.160.80
p-h w sampling13.850.2913.840.2813.880.1813.960.5513.610.81
+ +Table 8. Comparison of N2V vs different iN2V setups, GraphSAGE accuracy. + +
Percentage of training data
Dataset10%20%40%60%80%
Cora
N2V (inductive)42.183.5259.914.1575.071.9281.311.5684.501.44
frozen (\( \lambda = 1 \))74.762.0876.861.6680.751.1782.452.1484.131.99
post-hoc79.201.8079.441.5583.451.0883.731.1384.462.08
p-h w losses78.881.4580.501.2683.301.0983.391.7484.691.65
p-h w sampling79.111.7780.941.5882.021.3483.431.5084.021.88
CiteSeer
N2V (inductive)34.173.4342.762.1556.123.2463.612.5768.892.61
frozen (\( \lambda = 1 \))56.221.7259.002.1663.631.4767.731.6368.922.26
post-hoc57.251.2760.591.5262.881.8066.811.4269.132.33
p-h w losses57.251.5560.781.7862.761.9067.232.1468.922.07
p-h w sampling57.880.9157.923.4763.021.8867.231.8168.832.37
Pubmed
N2V (inductive)66.025.5774.732.3679.841.7881.850.6182.610.52
frozen (\( \lambda = 1 \))75.990.5278.371.0281.310.5582.300.8182.880.72
post-hoc79.930.5080.800.4681.800.4982.740.2982.900.73
p-h w losses79.750.6180.750.6181.910.5482.710.3482.590.63
p-h w sampling79.680.5580.590.5482.140.4382.810.4682.940.60
Computers
N2V (inductive)77.642.8184.440.8487.180.7789.210.4289.350.70
frozen (\( \lambda = 1 \))85.400.5487.450.4289.050.5690.090.4490.240.59
post-hoc87.940.4389.620.4390.660.3890.710.5291.070.64
p-h w losses87.840.3289.670.4090.840.3791.080.3691.240.52
p-h w sampling88.360.5889.260.5090.530.6291.160.4791.380.51
Photo
N2V (inductive)85.481.2887.731.4090.980.7191.920.6392.210.96
frozen (\( \lambda = 1 \))88.640.9590.310.5091.780.4792.480.5992.760.78
post-hoc90.490.6791.620.3592.400.5392.930.7493.080.77
p-h w losses90.510.7291.670.3092.370.4692.740.6192.951.00
p-h w sampling90.420.5491.700.4492.500.5092.820.6392.880.81
WikiCS
N2V (inductive)67.782.6874.221.6578.210.6980.400.4581.620.86
frozen (\( \lambda = 1 \))75.620.6377.960.5679.830.6080.850.6381.550.58
post-hoc78.780.7480.070.8881.010.6381.840.3982.550.72
p-h w losses78.720.5580.060.7181.200.4881.730.7682.430.69
p-h w sampling78.910.6180.190.7081.280.6181.820.5482.371.01
Actor
N2V (inductive)25.141.0925.560.9025.541.1525.421.0325.681.42
frozen (\( \lambda = 1 \))25.180.9722.940.8325.400.9924.930.8924.971.10
post-hoc25.401.0825.820.9925.340.8524.890.7525.391.18
p-h w losses25.061.0624.950.7825.850.7924.411.2725.792.27
p-h w sampling25.391.1425.500.7025.710.9325.160.9224.171.69
Amazon-ratings
N2V (inductive)37.470.4740.690.6344.710.8546.980.6849.471.07
frozen (\( \lambda = 1 \))36.701.5141.760.9145.480.5748.100.5850.191.10
post-hoc38.681.0042.120.6645.200.7047.930.2948.401.23
p-h w losses40.020.7942.010.4845.460.8250.390.4850.382.22
p-h w sampling40.100.6841.920.8545.310.7447.440.4648.401.30
Roman-empire
N2V (inductive)13.960.3514.100.3215.770.7615.981.5516.812.94
frozen (\( \lambda = 1 \))13.640.8513.880.2914.480.6314.821.0417.594.35
post-hoc13.311.2613.860.2913.860.1713.860.6414.891.70
p-h w losses13.171.4313.840.2713.900.2615.051.0716.931.31
p-h w sampling13.790.3913.590.7514.020.4115.331.1218.551.23
+ +Table 9. Comparison of best iN2V variant vs baselines concatenated with the original graph features; MLP accuracy. Gray numbers are not directly comparable as they use additional information (transductive setup). + +
Percentage of training data
Dataset10%20%40%60%80%
Cora
N2V (inductive)66.481.6870.541.2075.471.9176.092.0876.271.68
Feature Propagation82.241.0683.491.2683.831.4985.591.2485.541.48
iN2V (own)82.011.3883.971.3585.831.4585.811.7786.491.62
Original features66.981.7071.341.7975.941.3878.781.1179.231.95
N2V (transductive)81.941.3983.521.1385.991.1086.991.4486.131.69
CiteSeer
N2V (inductive)66.061.3170.041.2972.331.0773.581.1474.682.12
Feature Propagation68.481.2072.131.0874.371.2675.290.9776.341.66
iN2V (own)66.981.1471.350.8374.131.2375.850.8777.092.79
Original features66.601.5670.531.0473.231.1774.561.0475.202.10
N2V (transductive)69.400.8872.581.2874.780.9375.941.0377.272.34
Pubmed
N2V (inductive)83.940.4285.550.4286.730.4687.210.5887.400.92
Feature Propagation85.590.2687.060.4188.710.4989.330.3989.950.60
iN2V (own)85.380.3987.070.3988.690.6089.460.5189.900.51
Original features84.110.2186.050.4087.720.4088.630.6089.100.63
N2V (transductive)86.160.2887.950.4788.730.3789.420.4789.890.59
Computers
N2V (inductive)80.410.6082.980.6584.080.7684.340.9784.720.87
Feature Propagation85.390.6787.030.3389.140.4089.910.7090.150.78
iN2V (own)88.630.2289.750.2290.670.2490.990.5290.950.59
Original features82.150.6384.080.6585.570.5386.000.7786.560.58
N2V (transductive)89.360.3990.610.2391.190.3991.470.4091.300.47
Photo
N2V (inductive)85.732.2086.221.2290.710.5990.900.6591.760.70
Feature Propagation92.750.4493.930.2894.730.4094.750.5195.310.62
iN2V (own)93.990.3694.550.2995.330.3295.420.4595.700.74
Original features89.320.5990.850.5591.960.5592.410.6092.920.87
N2V (transductive)94.070.4694.930.2995.360.2895.730.5195.630.53
WikiCS
N2V (inductive)73.161.1775.600.7876.790.5177.300.7376.710.97
Feature Propagation78.141.2380.580.7082.910.5383.981.0184.781.15
iN2V (own)81.780.5083.470.4784.770.5585.410.7686.260.87
Original features76.850.6478.760.6680.430.6481.560.7882.381.06
N2V (transductive)82.260.5583.730.6584.700.4185.440.4985.500.77
Actor
N2V (inductive)33.990.9335.541.0136.640.8538.220.7038.081.62
Feature Propagation33.910.7235.940.4436.630.7638.220.7237.431.16
iN2V (own)35.151.0635.920.5637.660.9637.720.8938.071.12
Original features35.130.5136.200.6737.790.6238.750.8637.871.53
N2V (transductive)32.350.9434.750.8336.351.0137.301.5237.831.43
Amazon-ratings
N2V (inductive)37.360.3937.930.5938.300.5038.510.6938.571.67
Feature Propagation39.860.7242.921.0747.890.9550.900.7053.071.18
iN2V (own)41.240.7445.640.6751.910.4055.960.8158.440.67
Original features37.800.5141.770.6147.200.5650.820.7754.000.69
N2V (transductive)41.730.6645.420.5350.920.5853.930.5256.781.45
Roman-empire
N2V (inductive)62.370.5164.350.3865.670.1866.480.7066.541.07
Feature Propagation62.020.4164.160.3965.810.3365.960.5166.341.11
iN2V (own)60.570.3963.570.3065.360.3565.830.8065.991.12
Original features63.460.4365.050.3166.470.4166.830.7166.551.02
N2V (transductive)60.280.2763.540.2865.090.3765.990.6266.321.23
+ +Table 10. Comparison of best iN2V variant vs baselines concatenated with the original graph features; GraphSAGE accuracy. Gray numbers are not directly comparable as they use additional information (transductive setup). + +
Percentage of training data
Dataset10%20%40%60%80%
Cora
N2V (inductive)73.973.3581.921.8485.971.6786.970.9687.380.79
Feature Propagation80.921.2983.491.5485.331.4487.321.0787.751.12
iN2V (own)81.011.4283.281.4185.921.1786.771.4287.601.81
Original features75.272.6383.371.1786.231.7786.990.8687.051.20
N2V (transductive)82.430.7784.451.2386.501.0486.851.1688.301.47
CiteSeer
N2V (inductive)68.522.4772.491.1474.531.2875.851.4076.761.85
Feature Propagation68.001.5871.721.1574.770.9776.390.9976.461.73
iN2V (own)67.111.4771.140.6073.991.4775.771.2975.501.63
Original features69.851.4272.860.9874.931.5676.391.0576.821.77
N2V (transductive)71.081.0473.211.1274.760.8176.141.4376.821.77
Pubmed
N2V (inductive)84.330.4686.060.5987.320.3887.320.3988.010.78
Feature Propagation82.990.6485.460.4686.740.5587.970.4088.960.63
iN2V (own)83.020.7285.090.3787.510.5188.590.3589.390.72
Original features85.950.4786.990.2888.320.4889.250.4489.850.56
N2V (transductive)85.360.4587.280.4687.120.4488.420.3689.020.57
Computers
N2V (inductive)86.641.1189.050.6390.720.3691.330.5691.700.55
Feature Propagation87.770.5789.930.3991.300.3491.850.5292.130.45
iN2V (own)88.900.3690.500.2291.430.3691.910.4991.720.37
Original features87.520.4889.760.4091.120.2091.490.5891.500.48
N2V (transductive)90.480.3791.590.3692.090.3692.350.3692.200.37
Photo
N2V (inductive)91.971.7393.870.7094.940.4495.370.6095.560.81
Feature Propagation93.290.5494.400.4195.110.4195.540.5495.730.77
iN2V (own)93.870.5294.680.3395.260.2895.590.4295.630.74
Original features93.740.4294.590.3795.270.3895.520.5295.590.77
N2V (transductive)94.490.4694.970.2895.500.2295.690.3295.791.00
WikiCS
N2V (inductive)76.501.3479.990.6582.230.7583.760.6384.790.87
Feature Propagation79.580.8882.090.8684.100.6384.950.7785.770.76
iN2V (own)81.500.6483.380.5884.620.4785.250.4785.730.70
Original features80.750.6482.560.8184.280.5585.210.6385.880.70
N2V (transductive)81.910.5283.370.6784.660.6485.560.6186.100.70
Actor
N2V (inductive)32.040.9433.440.7335.140.9336.131.5236.052.46
Feature Propagation32.510.5133.080.7334.891.4036.190.8836.251.13
iN2V (own)32.120.7333.530.6335.381.2436.030.6436.662.24
Original features31.770.7133.840.9136.480.5437.380.7336.711.23
N2V (transductive)30.821.1232.520.7134.460.7535.451.1836.542.27
Amazon-ratings
N2V (inductive)39.140.5741.430.6544.991.0448.971.1650.981.35
Feature Propagation39.500.9842.930.8248.310.6751.280.6353.901.14
iN2V (own)40.290.6743.720.7050.990.8455.420.8358.700.84
Original features39.201.0641.660.7048.070.5853.090.6957.340.97
N2V (transductive)42.380.5346.310.7251.600.7654.841.1157.051.31
Roman-empire
N2V (inductive)64.710.6668.520.6572.780.3176.680.6481.161.65
Feature Propagation62.250.4667.030.4471.740.2576.020.7580.310.97
iN2V (own)59.290.8766.190.5971.500.4376.070.4980.480.75
Original features66.090.7570.280.6374.410.4778.560.5082.901.09
N2V (transductive)66.950.7771.660.4778.390.4682.410.4884.670.95
+ +Table 11. MLP on different iN2V setups concatenated with the original graph features. + +
Percentage of training data
Dataset10%20%40%60%80%
Cora
N2V (inductive)66.481.6870.541.2075.471.9176.092.0876.271.68
frozen (λ = 1)76.831.6679.701.4482.761.8785.311.8585.062.20
post-hoc82.011.3882.531.5885.811.3885.631.7387.122.08
p-h w losses81.851.4283.271.2185.831.4586.271.9786.681.66
p-h w sampling81.721.2683.971.3585.371.3585.811.7786.491.62
CiteSeer
N2V (inductive)66.061.3170.041.2972.331.0773.581.1474.682.12
frozen (λ = 1)65.591.5270.190.9773.531.2374.350.9676.491.76
post-hoc66.751.0171.241.0074.131.2375.850.8776.402.23
p-h w losses66.981.1471.350.8374.371.0575.701.5376.492.44
p-h w sampling67.331.3370.351.1174.301.0375.921.1077.092.79
Pubmed
N2V (inductive)83.940.4285.550.4286.730.4687.210.5887.400.92
frozen (λ = 1)83.400.6286.170.4288.080.5288.860.6388.970.79
post-hoc85.340.4087.050.4588.010.4589.440.5190.010.71
p-h w losses85.370.4087.070.3988.150.4989.290.4489.620.68
p-h w sampling85.380.3986.850.5188.690.6089.460.5189.900.51
Computers
N2V (inductive)80.410.6082.980.6584.080.7684.340.9784.720.87
frozen (λ = 1)86.090.5387.990.3689.470.3890.210.4390.190.64
post-hoc88.320.2989.570.3690.460.4490.370.5490.640.82
p-h w losses88.340.1689.630.3490.400.3990.680.5390.890.74
p-h w sampling88.630.2289.750.2290.670.2490.990.5290.950.59
Photo
N2V (inductive)85.732.2086.221.2290.710.5990.900.6591.760.70
frozen (λ = 1)92.130.3993.660.6494.590.3795.100.4995.320.74
post-hoc93.860.3994.540.3795.240.4095.310.4595.480.69
p-h w losses93.850.4694.590.3095.210.3895.280.3895.690.64
p-h w sampling93.990.3694.550.2995.330.3295.420.4595.700.74
WikiCS
N2V (inductive)73.161.1775.600.7876.790.5177.300.7376.710.97
frozen (λ = 1)77.520.8179.410.7681.460.8282.940.6783.871.02
post-hoc81.820.6983.340.6384.370.5685.270.4985.620.69
p-h w losses81.880.5083.370.5984.770.5585.410.7686.050.93
p-h w sampling81.780.5083.470.4784.540.4885.450.5486.260.87
Actor
N2V (inductive)33.990.9335.541.0136.640.8538.220.7038.081.62
frozen (λ = 1)33.071.0034.320.9736.571.1837.910.7637.801.56
post-hoc34.960.7836.160.5337.131.0837.751.1038.111.18
p-h w losses34.970.5435.950.3937.401.1937.531.0237.321.22
p-h w sampling35.151.0635.920.5637.660.9637.720.8938.071.12
Amazon-ratings
N2V (inductive)37.360.3937.930.5938.300.5038.510.6938.571.67
frozen (λ = 1)39.490.7841.981.1648.632.8449.490.8051.991.25
post-hoc41.240.7445.640.6751.890.5355.900.7758.690.44
p-h w losses41.110.6645.560.9652.080.4955.960.8158.440.67
p-h w sampling41.260.6945.730.6751.910.4055.930.6851.691.60
Roman-empire
N2V (inductive)62.370.5164.350.3865.670.1866.480.7066.541.07
frozen (λ = 1)60.570.4163.640.3165.360.3565.980.6766.111.17
post-hoc60.330.3363.490.3365.070.3965.830.8065.971.23
p-h w losses60.570.3963.590.3265.280.3465.810.5665.971.32
p-h w sampling60.040.3763.570.3065.160.3665.860.5965.991.12
+ +Table 12. GraphSAGE on different iN2V setups concatenated with the original graph features. + +
Percentage of training data
Dataset10%20%40%60%80%
Cora
N2V (inductive)73.973.3581.921.8485.971.6786.970.9687.380.79
frozen (λ = 1)76.561.7281.511.7785.121.3886.621.8787.601.81
post-hoc81.011.4281.981.7586.131.0886.551.3586.641.26
p-h w losses81.451.3982.821.2085.921.1786.351.5687.451.39
p-h w sampling80.971.4783.281.4185.711.4486.771.4287.641.27
CiteSeer
N2V (inductive)68.522.4772.491.1474.531.2875.851.4076.761.85
frozen (λ = 1)65.621.8570.691.1073.991.4775.771.2977.091.77
post-hoc66.791.5171.140.6073.911.3876.351.4275.501.63
p-h w losses67.111.4771.140.9873.511.8175.941.4176.552.79
p-h w sampling66.691.3469.101.6073.471.5475.731.2776.132.14
Pubmed
N2V (inductive)84.330.4686.060.5987.320.3887.320.3988.010.78
frozen (λ = 1)81.230.6984.860.5886.740.6087.920.4788.380.88
post-hoc82.830.7784.760.5386.780.5688.690.4989.230.69
p-h w losses82.750.7185.090.3786.680.4888.360.4789.070.69
p-h w sampling83.020.7284.820.4687.510.5188.590.3589.390.72
Computers
N2V (inductive)86.641.1189.050.6390.720.3691.330.5691.700.55
frozen (λ = 1)88.520.5190.170.3191.160.3591.630.3091.600.52
post-hoc88.830.5190.500.2491.470.3991.800.5992.010.35
p-h w losses88.830.3490.460.1991.430.3691.900.3891.850.61
p-h w sampling88.900.3690.500.2291.470.3091.910.4991.720.37
Photo
N2V (inductive)91.971.7393.870.7094.940.4495.370.6095.560.81
frozen (λ = 1)93.040.5194.290.4795.250.3095.540.5095.630.75
post-hoc93.740.4194.530.4595.230.3195.520.3995.730.61
p-h w losses93.540.4994.560.3395.140.4295.440.4995.740.85
p-h w sampling93.870.5294.680.3395.260.2895.590.4295.630.74
WikiCS
N2V (inductive)76.501.3479.990.6582.230.7583.760.6384.790.87
frozen (λ = 1)79.290.9881.620.5183.290.7184.440.6585.380.45
post-hoc81.500.6483.300.6284.410.5185.430.4885.970.78
p-h w losses81.540.6883.300.5984.470.6485.250.4786.000.90
p-h w sampling81.360.5283.380.5884.620.4785.290.6685.730.70
Actor
N2V (inductive)32.040.9433.440.7335.140.9336.131.5236.052.46
frozen (λ = 1)31.290.5832.750.4634.780.7736.231.1436.461.84
post-hoc32.120.7333.180.4534.890.7536.041.0636.662.24
p-h w losses32.820.7433.830.9135.820.8635.210.7536.762.21
p-h w sampling32.791.0333.530.6335.381.2436.030.6436.581.50
Amazon-ratings
N2V (inductive)39.140.5741.430.6544.991.0448.971.1650.981.35
frozen (λ = 1)38.760.6942.280.5448.161.5150.020.5954.051.04
post-hoc39.930.4843.450.9450.700.5155.120.8058.700.84
p-h w losses40.290.6743.720.7050.630.9055.290.6558.940.89
p-h w sampling40.130.6043.911.0250.990.8455.420.8349.541.30
Roman-empire
N2V (inductive)64.710.6668.520.6572.780.3176.680.6481.161.65
frozen (λ = 1)57.861.2165.400.5470.930.3975.480.5780.580.71
post-hoc57.941.3265.570.4471.500.4376.070.4980.480.75
p-h w losses59.290.8766.100.3170.450.3875.620.6479.920.96
p-h w sampling55.561.3066.190.5970.880.3875.150.8479.910.71
+ +Table 13. Comparison of best iN2V variant vs baselines; GAT accuracy. Gray numbers are not directly comparable as they use additional information (graph features/transductive setup). + +
Percentage of training data
Dataset10%20%40%60%80%
Cora
N2V (inductive)69.165.0277.141.5481.971.6783.471.4285.201.53
Feature Propagation79.101.0281.901.5383.021.7584.301.2285.312.43
iN2V (own)79.701.2981.741.2783.901.1484.581.0585.682.04
Original features81.351.8384.381.1386.511.1988.210.8488.151.13
N2V (transductive)79.201.5381.891.1784.161.3884.980.8785.941.67
CiteSeer
N2V (inductive)51.543.2458.892.6065.811.8370.121.5173.032.83
Feature Propagation56.732.0362.291.3066.421.4070.471.1072.792.02
iN2V (own)58.130.9961.741.5367.121.3071.020.9073.421.91
Original features71.641.3073.711.2475.941.1876.781.3477.362.04
N2V (transductive)57.801.2760.981.0366.761.3369.981.1872.612.47
Pubmed
N2V (inductive)72.715.5377.352.1881.740.4982.830.5583.020.74
Feature Propagation79.300.4780.310.5681.870.3782.930.5983.060.79
iN2V (own)80.650.4981.800.5182.770.5383.170.4383.340.72
Original features85.020.4285.790.4287.470.4588.210.4288.850.42
N2V (transductive)81.320.5382.090.4983.210.4183.460.4783.500.81
Computers
N2V (inductive)83.442.6187.800.6890.400.4990.920.5091.180.68
Feature Propagation83.091.1986.660.6689.850.5190.750.5391.280.70
iN2V (own)87.800.4189.190.4890.600.5591.060.5991.080.60
Original features87.440.5890.170.2091.380.2991.890.2292.470.56
N2V (transductive)89.290.3290.350.3191.010.3291.330.4091.330.68
Photo
N2V (inductive)86.951.7888.802.4992.520.6793.090.3693.540.87
Feature Propagation87.181.2890.210.6192.100.4293.050.5093.400.77
iN2V (own)90.460.5891.260.5092.710.5093.310.8893.180.81
Original features93.470.6094.420.5995.020.5895.370.2695.570.86
N2V (transductive)91.370.4992.500.2793.180.4493.690.4993.360.49
WikiCS
N2V (inductive)71.582.6076.911.2679.960.7380.871.0882.080.87
Feature Propagation73.423.4977.891.1280.110.6981.270.7582.221.20
iN2V (own)78.210.5779.630.8280.370.6381.560.8082.110.67
Original features80.950.8282.640.8584.100.4684.610.7984.950.62
N2V (transductive)79.510.6680.650.5381.960.5982.250.7182.860.96
Actor
N2V (inductive)25.381.1125.210.8526.090.9625.660.6325.281.86
Feature Propagation24.971.3825.630.8525.180.9825.530.6225.071.52
iN2V (own)25.530.9525.890.9424.800.9324.980.8425.201.62
Original features30.820.8632.230.8732.960.9633.721.0334.391.51
N2V (transductive)25.200.8125.690.6124.501.3724.301.5324.500.71
Amazon-ratings
N2V (inductive)39.330.7143.060.5845.920.6848.580.4350.270.62
Feature Propagation40.970.8442.920.4646.440.7748.870.5451.211.12
iN2V (own)41.490.7643.580.5146.350.5949.220.8151.561.48
Original features40.670.5642.480.6646.060.5348.750.6050.641.70
N2V (transductive)41.780.6443.920.8947.670.7949.850.8151.571.21
Roman-empire
N2V (inductive)13.860.2613.930.2414.300.4014.481.1515.901.45
Feature Propagation13.880.2513.940.2513.940.2214.320.7516.040.85
iN2V (own)13.910.1913.940.2214.210.3716.050.5317.491.05
Original features52.230.8259.460.8068.980.6074.950.4378.891.37
N2V (transductive)13.750.1913.630.4314.010.3214.130.7213.720.66
+ +Table 14. Comparison of best iN2V variant vs baselines; GIN accuracy. Gray numbers are not directly comparable as they use additional information (graph features/transductive setup). + +
Percentage of training data
Dataset10%20%40%60%80%
Cora
N2V (inductive)68.382.6374.482.5581.761.1083.971.3985.201.92
Feature Propagation78.271.3781.021.5682.401.7284.321.1085.131.98
iN2V (own)79.581.1681.211.2484.031.2584.911.1085.061.67
Original features82.401.0884.050.7986.701.3988.081.0587.340.95
N2V (transductive)79.091.3581.561.3583.231.2685.310.8586.091.85
CiteSeer
N2V (inductive)50.402.6058.061.7564.832.1969.312.0872.911.88
Feature Propagation56.452.4160.722.1465.971.6170.481.3173.301.73
iN2V (own)57.691.2561.242.1567.011.6570.741.4872.642.21
Original features71.660.8673.741.1975.441.0876.390.7777.181.80
N2V (transductive)57.311.5861.861.1166.491.1770.481.0573.242.39
Pubmed
N2V (inductive)76.221.2879.781.0281.910.5282.730.4783.070.82
Feature Propagation78.450.5579.190.5181.530.5982.710.4283.090.89
iN2V (own)80.490.5181.600.5382.640.5783.280.5583.070.61
Original features84.940.2786.030.4387.310.3888.080.5088.470.59
N2V (transductive)81.310.4882.210.4583.230.4183.470.5783.460.69
Computers
N2V (inductive)86.061.2488.860.5790.220.4491.000.5291.360.70
Feature Propagation84.340.9787.820.4789.610.6190.940.3991.080.36
iN2V (own)88.430.5789.630.4290.710.5891.320.5491.450.36
Original features87.570.3489.470.2690.970.3391.430.4191.850.45
N2V (transductive)88.970.3189.950.7190.710.4191.220.4691.390.55
Photo
N2V (inductive)88.950.7690.500.6492.190.4893.130.6793.560.44
Feature Propagation87.681.1590.350.2992.070.5592.860.4193.200.98
iN2V (own)90.600.5591.530.4492.270.4592.810.3493.450.75
Original features93.420.6294.300.5495.050.3095.340.5195.700.58
N2V (transductive)91.150.5092.220.3892.930.4893.460.4993.591.00
WikiCS
N2V (inductive)76.681.3678.500.8479.850.6580.920.8081.940.96
Feature Propagation75.732.6778.431.1880.340.7880.980.8681.881.40
iN2V (own)78.510.6180.030.7880.960.5381.560.6882.690.86
Original features80.820.8982.400.8183.890.5584.730.8285.450.54
N2V (transductive)79.230.5680.560.7381.660.6382.300.6382.550.78
Actor
N2V (inductive)25.330.9725.841.0725.700.7624.911.0324.621.70
Feature Propagation25.200.9225.720.8825.590.5725.660.6825.291.17
iN2V (own)24.231.4625.481.1525.640.9025.200.7625.532.13
Original features30.240.7231.800.8832.970.6533.700.7835.501.91
N2V (transductive)25.281.2125.371.1424.150.6524.861.1424.990.71
Amazon-ratings
N2V (inductive)40.580.7743.560.3246.290.9449.840.4953.490.68
Feature Propagation40.740.7343.240.6946.850.9250.560.9153.830.93
iN2V (own)41.170.7143.320.6347.350.8450.930.7953.591.06
Original features39.870.6741.480.5245.300.6748.181.1752.181.28
N2V (transductive)41.770.5244.470.4548.340.6050.380.5052.450.79
Roman-empire
N2V (inductive)13.890.2413.820.8017.242.6422.571.5626.661.06
Feature Propagation13.890.2413.830.5516.582.0119.742.9924.871.90
iN2V (own)13.920.2713.980.3218.342.2820.322.6724.941.41
Original features58.010.3362.580.7469.230.4473.740.5576.290.69
N2V (transductive)19.611.6721.110.4624.471.1727.821.3631.552.46
+ +Table 15. Comparison of N2V vs different iN2V setups, GAT accuracy. + +
Percentage of training data
Dataset10%20%40%60%80%
Cora
N2V (inductive)69.165.0277.141.5481.971.6783.471.4285.201.53
frozen (λ = 1)75.961.8280.491.6182.911.1884.451.0885.061.62
post-hoc79.521.2581.151.5983.850.9984.211.3185.421.91
p-h w losses79.161.1681.261.2683.901.1484.351.5485.682.04
p-h w sampling79.701.2981.741.2783.440.6984.581.0585.351.65
Citeseer
N2V (inductive)51.543.2458.892.6065.811.8370.121.5173.032.83
frozen (λ = 1)57.131.2261.471.6966.181.8270.091.1473.242.15
post-hoc57.871.1961.381.3567.091.0070.721.6372.671.67
p-h w losses58.130.9961.741.5367.231.4570.181.4173.421.91
p-h w sampling58.280.9861.961.4767.121.3071.020.9072.971.74
Pubmed
N2V (inductive)72.715.5377.352.1881.740.4982.830.5583.020.74
frozen (λ = 1)79.250.6480.450.6382.110.4982.870.6683.210.68
post-hoc80.570.3381.870.5882.660.4583.270.3983.340.72
p-h w losses80.650.4981.800.5182.630.4783.170.4383.350.72
p-h w sampling80.710.5681.790.5382.770.5383.200.5183.090.84
Computers
N2V (inductive)83.442.6187.800.6890.400.4990.920.5091.180.68
frozen (λ = 1)86.020.7988.230.5590.000.4490.550.4791.050.56
post-hoc87.550.3689.200.3490.280.3990.890.4690.740.74
p-h w losses87.770.5089.190.4890.310.4090.970.4191.130.67
p-h w sampling87.800.4189.320.3490.600.5591.060.5991.080.60
Photo
N2V (inductive)86.951.7888.802.4992.520.6793.090.3693.540.87
frozen (λ = 1)89.240.7690.730.5592.410.5193.140.3993.620.72
post-hoc90.410.4591.260.5092.510.5193.110.7993.250.86
p-h w losses90.460.5891.530.4992.370.3293.060.5693.520.58
p-h w sampling90.190.6191.710.4292.710.5093.310.8893.180.81
WikiCS
N2V (inductive)71.582.6076.911.2679.960.7380.871.0882.080.87
frozen (λ = 1)76.511.0878.091.1779.560.5681.290.7482.040.77
post-hoc78.110.9779.450.8980.330.8081.090.6982.320.59
p-h w losses78.000.5979.550.9380.370.6381.120.8282.130.79
p-h w sampling78.210.5779.630.8280.590.4581.560.8082.110.67
Actor
N2V (inductive)25.381.1125.210.8526.090.9625.660.6325.281.86
frozen (λ = 1)25.230.9125.790.6824.800.9324.980.8425.201.62
post-hoc26.010.6526.030.5625.460.6824.970.6325.011.37
p-h w losses25.411.0625.890.9425.701.0124.910.6125.501.97
p-h w sampling25.530.9525.661.1425.531.1924.910.6923.991.74
Amazon-ratings
N2V (inductive)39.330.7143.060.5845.920.6848.580.4350.270.62
frozen (λ = 1)40.220.8943.300.4046.350.5949.220.8151.561.48
post-hoc41.480.7543.580.5145.850.5447.450.5748.001.23
p-h w losses41.490.7643.380.6145.750.6748.150.5549.231.25
p-h w sampling41.630.8243.490.6645.770.6247.190.7448.891.00
Roman-empire
N2V (inductive)13.860.2613.930.2414.300.4014.481.1515.901.45
frozen (λ = 1)13.910.1913.940.2213.880.3114.420.6916.860.75
post-hoc13.840.2513.820.2914.140.7516.050.5317.491.05
p-h w losses13.840.2813.860.2314.210.3714.160.3815.631.05
p-h w sampling13.890.2013.780.3413.900.2914.000.5615.650.89
+ +Table 16. Comparison of N2V vs different iN2V setups, GIN accuracy. + +
Percentage of training data
Dataset10%20%40%60%80%
Cora
N2V (inductive)68.382.6374.482.5581.761.1083.971.3985.201.92
frozen (λ = 1)75.572.0879.451.3682.231.2683.691.1984.582.36
post-hoc79.421.3780.661.6484.031.2584.101.5285.092.21
p-h w losses79.511.1081.211.4683.531.3083.711.7085.721.86
p-h w sampling79.581.1681.211.2483.150.7484.911.1085.061.67
CiteSeer
N2V (inductive)50.402.6058.061.7564.832.1969.312.0872.911.88
frozen (λ = 1)56.231.3959.002.1365.571.5669.911.6472.852.20
post-hoc57.781.3061.421.6566.891.8270.741.4873.121.86
p-h w losses57.691.2561.242.1566.772.3670.870.8072.971.69
p-h w sampling57.751.2361.551.0767.011.6570.241.6272.642.21
Pubmed
N2V (inductive)76.221.2879.781.0281.910.5282.730.4783.070.82
frozen (λ = 1)78.150.5480.230.6381.730.3282.570.5082.790.81
post-hoc80.450.3481.600.5382.530.3883.390.3883.220.75
p-h w losses80.490.5181.530.4682.770.4683.210.4383.070.61
p-h w sampling80.420.4581.520.4582.640.5783.280.5583.100.59
Computers
N2V (inductive)86.061.2488.860.5790.220.4491.000.5291.360.70
frozen (λ = 1)86.620.4488.360.7189.960.6290.760.3491.180.75
post-hoc88.210.4289.780.3490.540.4490.880.5291.270.63
p-h w losses88.100.5289.630.4290.740.4391.120.4491.450.36
p-h w sampling88.430.5789.600.4590.710.5891.320.5491.450.72
Photo
N2V (inductive)88.950.7690.500.6492.190.4893.130.6793.560.44
frozen (λ = 1)89.450.7691.040.4792.400.4693.110.5493.450.75
post-hoc90.600.5491.600.4092.340.6592.750.7193.450.64
p-h w losses90.600.5591.580.3892.360.4492.810.3493.230.84
p-h w sampling90.590.5091.530.4492.270.4592.690.6493.020.78
WikiCS
N2V (inductive)76.681.3678.500.8479.850.6580.920.8081.940.96
frozen (λ = 1)77.170.8678.410.9880.140.6980.760.9682.260.78
post-hoc78.590.4579.980.8480.890.5181.910.8182.690.86
p-h w losses78.510.6179.910.7880.950.5081.560.6882.260.91
p-h w sampling78.620.5080.030.7880.960.5381.970.6882.670.78
Actor
N2V (inductive)25.330.9725.841.0725.700.7624.911.0324.621.70
frozen (λ = 1)25.710.7825.481.1525.640.9025.260.7625.321.21
post-hoc25.101.0925.531.0325.061.3425.181.3425.960.74
p-h w losses25.301.0625.761.0125.940.6925.401.1525.532.13
p-h w sampling24.231.4625.521.0025.720.9025.121.0126.490.74
Amazon-ratings
N2V (inductive)40.580.7743.560.3246.290.9449.840.4953.490.68
frozen (λ = 1)40.150.8143.420.5747.350.8450.930.7953.591.06
post-hoc41.260.5843.180.5945.720.7148.020.4248.330.99
p-h w losses41.170.7142.940.7845.840.3548.791.1250.191.28
p-h w sampling41.140.7743.320.6345.750.7547.180.4849.671.14
Roman-empire
N2V (inductive)13.890.2413.820.8017.242.6422.571.5626.661.06
frozen (λ = 1)13.890.2413.980.3217.362.3019.721.7424.941.41
post-hoc13.890.2413.960.3818.342.2819.222.7224.471.94
p-h w losses13.890.2414.170.5517.002.0520.322.6724.880.90
p-h w sampling13.920.2713.850.5018.071.6120.493.1824.721.79
+ +Table 17. Comparison of best iN2V variant vs baselines concatenated with the original graph features; GAT accuracy. Gray numbers are not directly comparable as they use additional information (transductive setup). + +
Percentage of training data
Dataset10%20%40%60%80%
Cora
N2V (inductive)80.652,1483.121,2286.081,5287.251,0988.271,48
Feature Propagation81.561,1484.081,3786.491,3388.121,3687.561,35
iN2V (own)81.681,2384.111,1286.701,3687.990,8587.931,53
Original features81.351,8384.381,1386.511,1988.210,8488.151,13
N2V (transductive)82.691,2184.901,0886.931,3887.681,0988.271,29
CiteSeer
N2V (inductive)71.101,3773.021,2574.791,4076.931,0577.631,99
Feature Propagation69.461,0672.031,4675.810,6576.301,2377.151,61
iN2V (own)68.041,3971.480,8975.110,9176.691,4277.152,36
Original features71.641,3073.711,2475.941,1876.781,3477.362,04
N2V (transductive)70.911,1373.120,9975.690,7976.711,3477.181,54
Pubmed
N2V (inductive)84.120,5085.410,3986.420,5487.240,4587.920,63
Feature Propagation83.930,4485.300,4386.610,3987.180,3687.630,54
iN2V (own)83.510,4984.860,4686.440,5387.570,4387.970,81
Original features85.020,4285.790,4287.470,4588.210,4288.850,42
N2V (transductive)85.200,5087.160,4487.200,5288.230,5688.800,51
Computers
N2V (inductive)85.441,3289.010,7790.830,5491.640,5292.290,52
Feature Propagation87.251,0289.950,5391.250,4991.930,3592.310,58
iN2V (own)88.720,4890.280,2791.510,3791.940,5292.120,37
Original features87.440,5890.170,2091.380,2991.890,2292.470,56
N2V (transductive)90.540,2991.460,3392.190,3792.530,3192.740,31
Photo
N2V (inductive)91.321,8193.170,7794.930,4895.010,4595.570,51
Feature Propagation93.170,6194.390,3495.090,3995.250,4495.610,73
iN2V (own)93.820,5494.710,3795.230,4195.610,4795.750,68
Original features93.470,6094.420,5995.020,5895.370,2695.570,86
N2V (transductive)94.330,3394.950,2495.420,4295.630,3695.920,67
WikiCS
N2V (inductive)77.940,9780.220,8582.030,8583.060,8284.371,17
Feature Propagation79.690,9681.510,9083.260,7584.430,7384.980,97
iN2V (own)81.210,5482.700,8183.750,7184.630,6385.110,77
Original features80.950,8282.640,8584.100,4684.610,7984.950,62
N2V (transductive)81.570,6383.040,5984.000,5084.830,5985.221,05
Actor
N2V (inductive)30.700,7331.710,7533.100,8333.640,7934.012,26
Feature Propagation30.510,7231.831,0433.101,2133.351,0533.991,54
iN2V (own)30.810,8132.350,7233.180,1733.341,3134.211,92
Original features30.820,8632.230,8732.960,9633.721,0334.391,51
N2V (transductive)29.450,7131.130,9632.501,0033.351,3433.672,04
Amazon-ratings
N2V (inductive)40.230,7843.040,6945.800,7548.150,7451.231,01
Feature Propagation41.740,6643.780,5546.850,7549.000,5851.490,88
iN2V (own)41.500,7243.620,8147.160,7249.510,5351.190,94
Original features40.670,5642.480,6646.060,5348.750,6050.641,70
N2V (transductive)42.020,5344.650,6148.960,2551.480,6753.070,91
Roman-empire
N2V (inductive)49.851,9257.090,9563.170,9771.360,5776.532,07
Feature Propagation48.421,1154.870,9562.280,5569.300,8474.910,91
iN2V (own)43.341,1255.040,5964.400,9069.460,5174.141,04
Original features52.230,8259.460,8068.980,6074.950,4378.891,37
N2V (transductive)55.550,5262.070,7868.740,6373.500,9376.511,16
+ +Table 18. Comparison of best iN2V variant vs baselines concatenated with the original graph features; GIN accuracy. Gray numbers are not directly comparable as they use additional information (transductive setup). + +
Percentage of training data

| Dataset | 10% | 20% | 40% | 60% | 80% |
| --- | --- | --- | --- | --- | --- |
| **Cora** | | | | | |
| N2V (inductive) | 81.53±1.44 | 84.26±1.18 | 86.53±1.10 | 87.40±2.26 | 87.38±1.40 |
| Feature Propagation | 82.54±1.40 | 84.16±1.34 | 85.83±1.44 | 87.44±1.34 | 87.38±1.42 |
| iN2V (own) | 81.68±1.23 | 83.80±1.02 | 86.51±1.12 | 87.32±1.01 | 87.79±1.43 |
| Original features | 82.40±1.08 | 84.05±0.79 | 86.70±1.39 | 88.08±1.05 | 87.34±0.95 |
| N2V (transductive) | 82.42±1.11 | 84.28±0.89 | 86.43±1.21 | 87.21±1.06 | 88.34±1.33 |
| **Citeseer** | | | | | |
| N2V (inductive) | 71.04±1.07 | 73.64±0.99 | 74.56±0.97 | 76.68±1.26 | 76.64±2.60 |
| Feature Propagation | 69.51±0.65 | 72.20±1.22 | 75.12±1.17 | 76.87±1.01 | 77.60±2.02 |
| iN2V (own) | 68.47±0.80 | 71.62±1.14 | 74.31±1.00 | 76.75±1.49 | 77.36±2.20 |
| Original features | 71.66±0.86 | 73.74±1.19 | 75.44±1.08 | 76.39±0.77 | 77.18±1.80 |
| N2V (transductive) | 70.30±0.99 | 72.96±0.79 | 74.68±1.22 | 75.91±1.44 | 77.21±2.40 |
| **Pubmed** | | | | | |
| N2V (inductive) | 84.01±0.37 | 85.53±0.35 | 86.65±0.42 | 87.29±0.47 | 88.05±0.35 |
| Feature Propagation | 84.16±0.22 | 85.55±0.30 | 86.81±0.54 | 87.51±0.39 | 88.04±0.53 |
| iN2V (own) | 83.75±0.49 | 84.95±0.33 | 86.79±0.53 | 87.65±0.31 | 88.00±0.58 |
| Original features | 84.94±0.27 | 86.03±0.43 | 87.31±0.38 | 88.08±0.50 | 88.47±0.59 |
| N2V (transductive) | 84.68±0.46 | 86.00±0.39 | 86.63±0.57 | 87.47±0.39 | 87.74±0.43 |
| **Computers** | | | | | |
| N2V (inductive) | 85.46±1.30 | 88.31±0.84 | 90.29±0.35 | 91.30±0.69 | 91.54±0.45 |
| Feature Propagation | 87.07±0.62 | 89.52±0.38 | 90.95±0.37 | 91.49±0.40 | 91.88±0.64 |
| iN2V (own) | 88.60±0.17 | 90.09±0.41 | 91.38±0.38 | 91.76±0.33 | 91.80±0.52 |
| Original features | 87.57±0.34 | 89.47±0.26 | 90.97±0.33 | 91.43±0.41 | 91.85±0.45 |
| N2V (transductive) | 90.04±0.43 | 91.09±0.31 | 91.71±0.43 | 92.04±0.39 | 91.93±0.57 |
| **Photo** | | | | | |
| N2V (inductive) | 91.26±1.30 | 92.96±0.75 | 94.83±0.42 | 95.05±0.42 | 95.35±0.56 |
| Feature Propagation | 92.87±0.58 | 94.24±0.56 | 94.92±0.43 | 95.20±0.55 | 95.41±0.52 |
| iN2V (own) | 93.81±0.38 | 94.75±0.37 | 95.25±0.38 | 95.56±0.48 | 95.76±0.73 |
| Original features | 93.42±0.62 | 94.30±0.54 | 95.05±0.30 | 95.34±0.51 | 95.70±0.58 |
| N2V (transductive) | 94.20±0.34 | 94.93±0.34 | 95.60±0.15 | 95.65±0.35 | 95.92±0.56 |
| **WikiCS** | | | | | |
| N2V (inductive) | 77.69±1.46 | 80.48±0.43 | 82.43±0.77 | 83.80±0.54 | 84.94±0.88 |
| Feature Propagation | 79.43±1.15 | 81.61±0.71 | 82.96±0.83 | 84.29±0.57 | 84.74±1.02 |
| iN2V (own) | 81.26±0.67 | 82.98±0.72 | 84.26±0.53 | 84.80±0.67 | 85.49±0.93 |
| Original features | 80.82±0.89 | 82.40±0.81 | 83.89±0.55 | 84.73±0.82 | 85.45±0.54 |
| N2V (transductive) | 81.70±0.61 | 83.16±0.67 | 84.13±0.73 | 84.78±0.53 | 85.22±0.73 |
| **Actor** | | | | | |
| N2V (inductive) | 30.39±1.03 | 32.11±0.75 | 33.20±1.12 | 34.32±0.85 | 34.80±1.63 |
| Feature Propagation | 30.28±0.97 | 31.65±0.91 | 33.08±0.59 | 33.85±0.95 | 34.05±1.40 |
| iN2V (own) | 30.57±0.65 | 32.62±0.84 | 33.92±0.93 | 34.08±1.17 | 34.87±1.11 |
| Original features | 30.24±0.72 | 31.80±0.88 | 32.97±0.65 | 33.70±0.78 | 35.50±1.91 |
| N2V (transductive) | 28.95±0.71 | 30.56±0.93 | 32.48±1.04 | 33.38±1.13 | 33.86±1.53 |
| **Amazon-ratings** | | | | | |
| N2V (inductive) | 40.16±0.59 | 42.10±0.59 | 45.36±0.74 | 48.74±0.84 | 53.10±0.77 |
| Feature Propagation | 41.30±0.73 | 43.90±0.56 | 47.44±0.68 | 50.39±0.88 | 53.59±0.83 |
| iN2V (own) | 41.32±0.80 | 43.58±0.71 | 47.54±0.68 | 50.99±0.76 | 54.23±1.12 |
| Original features | 39.87±0.67 | 41.48±0.52 | 45.30±0.67 | 48.18±1.17 | 52.18±1.28 |
| N2V (transductive) | 42.21±0.47 | 45.17±0.63 | 49.53±0.51 | 51.99±0.74 | 54.00±1.05 |
| **Roman-empire** | | | | | |
| N2V (inductive) | 57.60±0.97 | 62.77±0.49 | 68.75±0.40 | 73.50±0.46 | 76.17±0.69 |
| Feature Propagation | 56.49±1.17 | 61.36±0.60 | 67.70±0.70 | 72.28±0.53 | 75.40±1.06 |
| iN2V (own) | 52.69±0.76 | 59.17±0.56 | 66.86±0.46 | 71.91±0.54 | 75.46±0.84 |
| Original features | 58.01±0.33 | 62.58±0.74 | 69.23±0.44 | 73.74±0.55 | 76.29±0.69 |
| N2V (transductive) | 61.38±0.68 | 67.92±0.64 | 73.50±0.50 | 75.71±0.50 | 76.59±0.89 |
+ +Table 19. GAT on different iN2V setups concatenated with the original graph features. + +
Percentage of training data

| Dataset | 10% | 20% | 40% | 60% | 80% |
| --- | --- | --- | --- | --- | --- |
| **Cora** | | | | | |
| N2V (inductive) | 80.65±2.14 | 83.12±1.22 | 86.08±1.52 | 87.25±1.09 | 88.27±1.48 |
| frozen (λ = 1) | 79.19±1.56 | 82.24±1.62 | 85.43±1.39 | 86.99±1.34 | 87.93±1.53 |
| post-hoc | 81.75±1.27 | 83.06±1.67 | 86.70±1.36 | 86.85±1.46 | 87.75±1.87 |
| p-h w losses | 81.78±1.67 | 83.55±1.46 | 86.53±0.93 | 87.45±1.18 | 87.12±1.99 |
| p-h w sampling | 81.68±1.23 | 84.11±1.12 | 86.16±1.14 | 87.99±0.85 | 88.08±1.58 |
| **CiteSeer** | | | | | |
| N2V (inductive) | 71.10±1.37 | 73.02±1.25 | 74.79±1.40 | 76.93±1.05 | 77.63±1.99 |
| frozen (λ = 1) | 67.90±1.20 | 71.18±1.15 | 75.11±0.91 | 75.71±1.30 | 76.85±2.04 |
| post-hoc | 68.10±1.23 | 71.35±1.16 | 74.06±1.59 | 76.69±1.42 | 77.15±2.36 |
| p-h w losses | 68.04±1.39 | 71.48±0.89 | 73.71±1.68 | 75.80±1.89 | 76.61±2.00 |
| p-h w sampling | 68.06±1.24 | 71.14±1.11 | 74.42±1.17 | 75.64±1.39 | 77.06±2.45 |
| **Pubmed** | | | | | |
| N2V (inductive) | 84.12±0.50 | 85.41±0.39 | 86.42±0.54 | 87.24±0.45 | 87.92±0.63 |
| frozen (λ = 1) | 82.83±0.64 | 84.93±0.54 | 86.35±0.57 | 86.89±0.52 | 87.17±0.54 |
| post-hoc | 83.40±0.31 | 84.70±0.40 | 86.44±0.53 | 87.57±0.43 | 87.97±0.81 |
| p-h w losses | 83.51±0.49 | 84.86±0.46 | 86.13±0.54 | 87.51±0.30 | 87.88±0.69 |
| p-h w sampling | 83.41±0.47 | 84.65±0.43 | 86.61±0.42 | 87.62±0.37 | 88.15±0.51 |
| **Computers** | | | | | |
| N2V (inductive) | 85.44±1.32 | 89.01±0.77 | 90.83±0.54 | 91.64±0.52 | 92.29±0.52 |
| frozen (λ = 1) | 87.93±0.92 | 90.06±0.35 | 91.30±0.36 | 92.03±0.52 | 92.20±0.54 |
| post-hoc | 88.37±0.50 | 90.28±0.27 | 91.37±0.26 | 91.94±0.52 | 92.23±0.44 |
| p-h w losses | 88.39±0.61 | 90.25±0.35 | 91.47±0.51 | 91.89±0.43 | 92.12±0.59 |
| p-h w sampling | 88.72±0.48 | 90.13±0.27 | 91.51±0.37 | 92.13±0.39 | 92.12±0.37 |
| **Photo** | | | | | |
| N2V (inductive) | 91.32±1.81 | 93.17±0.77 | 94.93±0.48 | 95.01±0.45 | 95.57±0.51 |
| frozen (λ = 1) | 92.61±0.50 | 94.16±0.59 | 95.01±0.27 | 95.18±0.49 | 95.48±0.59 |
| post-hoc | 93.76±0.47 | 94.68±0.55 | 95.16±0.38 | 95.35±0.44 | 95.78±0.86 |
| p-h w losses | 93.66±0.52 | 94.53±0.35 | 95.10±0.43 | 95.16±0.49 | 95.42±0.82 |
| p-h w sampling | 93.82±0.54 | 94.71±0.37 | 95.23±0.41 | 95.61±0.47 | 95.75±0.68 |
| **WikiCS** | | | | | |
| N2V (inductive) | 77.94±0.97 | 80.22±0.85 | 82.03±0.85 | 83.06±0.82 | 84.37±1.17 |
| frozen (λ = 1) | 79.10±0.92 | 81.17±0.69 | 82.72±0.66 | 83.16±0.70 | 83.90±0.82 |
| post-hoc | 81.21±0.54 | 82.60±0.72 | 83.55±0.52 | 84.56±0.60 | 85.11±0.77 |
| p-h w losses | 81.02±0.59 | 82.62±0.71 | 83.61±0.76 | 84.71±0.66 | 84.87±0.88 |
| p-h w sampling | 81.02±0.78 | 82.70±0.81 | 83.75±0.71 | 84.63±0.63 | 85.29±1.16 |
| **Actor** | | | | | |
| N2V (inductive) | 30.70±0.73 | 31.71±0.75 | 33.10±0.83 | 33.64±0.79 | 34.01±2.26 |
| frozen (λ = 1) | 29.92±0.87 | 30.97±0.61 | 32.68±0.77 | 33.28±1.33 | 33.20±2.10 |
| post-hoc | 30.42±1.55 | 32.00±0.79 | 32.83±0.85 | 33.68±0.80 | 34.24±1.32 |
| p-h w losses | 30.66±0.59 | 32.35±0.72 | 33.18±1.17 | 33.84±0.92 | 33.11±1.50 |
| p-h w sampling | 30.81±0.81 | 31.80±0.92 | 33.20±0.87 | 33.34±1.31 | 34.21±1.92 |
| **Amazon-ratings** | | | | | |
| N2V (inductive) | 40.23±0.78 | 43.04±0.69 | 45.80±0.75 | 48.15±0.74 | 51.23±1.01 |
| frozen (λ = 1) | 40.96±0.69 | 43.35±0.41 | 47.16±0.72 | 49.51±0.53 | 51.76±1.38 |
| post-hoc | 41.30±0.45 | 43.44±0.55 | 46.38±0.62 | 48.88±0.51 | 51.48±1.32 |
| p-h w losses | 41.50±0.72 | 43.42±0.56 | 46.63±0.76 | 49.08±0.79 | 51.19±0.94 |
| p-h w sampling | 41.31±0.70 | 43.62±0.81 | 46.65±0.63 | 48.32±0.69 | 48.97±1.41 |
| **Roman-empire** | | | | | |
| N2V (inductive) | 49.85±1.92 | 57.09±0.95 | 63.17±0.97 | 71.36±0.57 | 76.53±2.07 |
| frozen (λ = 1) | 43.20±1.36 | 54.02±0.85 | 61.61±0.83 | 68.65±0.62 | 74.19±1.65 |
| post-hoc | 43.34±1.12 | 54.62±1.04 | 64.40±0.90 | 69.46±0.51 | 74.14±1.04 |
| p-h w losses | 43.26±1.28 | 55.04±0.59 | 63.81±0.65 | 66.72±0.95 | 73.95±1.33 |
| p-h w sampling | 41.55±0.88 | 53.68±0.71 | 60.86±0.94 | 67.16±1.00 | 73.66±0.99 |
+ +Table 20. GIN on different iN2V setups concatenated with the original graph features. + +
Percentage of training data

| Dataset | 10% | 20% | 40% | 60% | 80% |
| --- | --- | --- | --- | --- | --- |
| **Cora** | | | | | |
| N2V (inductive) | 81.53±1.44 | 84.26±1.18 | 86.53±1.10 | 87.40±2.26 | 87.38±1.40 |
| frozen (λ = 1) | 79.73±1.69 | 82.98±1.49 | 85.44±1.28 | 86.42±1.78 | 87.79±1.43 |
| post-hoc | 81.68±1.23 | 82.83±1.68 | 86.51±1.12 | 86.92±1.42 | 87.49±1.74 |
| p-h w losses | 81.58±1.34 | 83.34±1.56 | 86.64±1.07 | 86.62±1.57 | 87.60±1.59 |
| p-h w sampling | 81.66±1.41 | 83.80±1.02 | 85.95±0.74 | 87.32±1.01 | 87.79±1.60 |
| **CiteSeer** | | | | | |
| N2V (inductive) | 71.04±1.07 | 73.64±0.99 | 74.56±0.97 | 76.68±1.26 | 76.64±2.60 |
| frozen (λ = 1) | 67.49±1.02 | 71.32±1.29 | 74.31±1.00 | 76.12±1.63 | 77.03±2.22 |
| post-hoc | 68.47±0.80 | 71.68±1.24 | 74.37±1.71 | 76.75±1.49 | 77.36±2.20 |
| p-h w losses | 68.20±1.00 | 71.62±1.14 | 74.22±1.47 | 75.62±1.46 | 76.94±1.59 |
| p-h w sampling | 68.59±0.88 | 71.62±1.14 | 74.62±1.42 | 75.94±1.33 | 76.88±2.03 |
| **Pubmed** | | | | | |
| N2V (inductive) | 84.01±0.37 | 85.53±0.35 | 86.65±0.42 | 87.29±0.47 | 88.05±0.35 |
| frozen (λ = 1) | 82.46±0.75 | 85.09±0.42 | 86.60±0.49 | 87.27±0.51 | 87.65±0.43 |
| post-hoc | 83.72±0.37 | 84.95±0.33 | 86.79±0.53 | 87.65±0.31 | 88.09±0.79 |
| p-h w losses | 83.64±0.39 | 84.99±0.45 | 86.39±0.54 | 87.43±0.48 | 88.02±0.66 |
| p-h w sampling | 83.75±0.49 | 84.99±0.39 | 86.46±0.48 | 87.55±0.39 | 88.00±0.58 |
| **Computers** | | | | | |
| N2V (inductive) | 85.46±1.30 | 88.31±0.84 | 90.29±0.35 | 91.30±0.69 | 91.54±0.45 |
| frozen (λ = 1) | 88.12±0.64 | 89.78±0.29 | 91.01±0.36 | 91.68±0.52 | 91.66±0.61 |
| post-hoc | 88.52±0.29 | 90.10±0.24 | 91.38±0.38 | 91.68±0.38 | 91.78±0.70 |
| p-h w losses | 88.62±0.31 | 90.27±0.34 | 91.43±0.36 | 91.76±0.33 | 91.94±0.55 |
| p-h w sampling | 88.60±0.17 | 90.09±0.41 | 91.32±0.34 | 91.84±0.44 | 91.80±0.52 |
| **Photo** | | | | | |
| N2V (inductive) | 91.26±1.30 | 92.96±0.75 | 94.83±0.42 | 95.05±0.42 | 95.35±0.56 |
| frozen (λ = 1) | 92.64±0.52 | 94.11±0.65 | 94.67±0.37 | 95.23±0.46 | 95.25±0.74 |
| post-hoc | 93.64±0.57 | 94.62±0.30 | 95.31±0.49 | 95.39±0.49 | 95.73±0.62 |
| p-h w losses | 93.52±0.46 | 94.51±0.49 | 95.14±0.35 | 95.35±0.67 | 95.70±0.68 |
| p-h w sampling | 93.81±0.38 | 94.75±0.37 | 95.25±0.38 | 95.56±0.48 | 95.76±0.73 |
| **WikiCS** | | | | | |
| N2V (inductive) | 77.69±1.46 | 80.48±0.43 | 82.43±0.77 | 83.80±0.54 | 84.94±0.88 |
| frozen (λ = 1) | 79.26±0.98 | 81.15±0.59 | 82.96±0.74 | 84.02±0.68 | 84.65±1.07 |
| post-hoc | 81.33±0.70 | 82.98±0.72 | 84.12±0.47 | 84.80±0.67 | 85.27±0.84 |
| p-h w losses | 81.26±0.67 | 82.97±0.70 | 84.26±0.53 | 84.91±0.65 | 85.21±0.78 |
| p-h w sampling | 81.37±0.77 | 83.05±0.64 | 84.13±0.50 | 84.83±0.61 | 85.49±0.93 |
| **Actor** | | | | | |
| N2V (inductive) | 30.39±1.03 | 32.11±0.75 | 33.20±1.12 | 34.32±0.85 | 34.80±1.63 |
| frozen (λ = 1) | 29.89±1.05 | 31.46±0.99 | 32.68±0.96 | 33.44±1.15 | 34.37±2.04 |
| post-hoc | 30.69±0.76 | 32.67±0.87 | 33.46±1.11 | 34.08±1.17 | 34.87±1.11 |
| p-h w losses | 30.57±0.65 | 32.62±0.84 | 33.92±0.93 | 33.63±0.98 | 34.13±1.99 |
| p-h w sampling | 30.60±0.72 | 32.33±0.65 | 33.29±0.89 | 33.72±1.19 | 34.42±1.85 |
| **Amazon-ratings** | | | | | |
| N2V (inductive) | 40.16±0.59 | 42.10±0.59 | 45.36±0.74 | 48.74±0.84 | 53.10±0.77 |
| frozen (λ = 1) | 40.96±0.68 | 43.82±0.52 | 47.54±0.68 | 50.99±0.76 | 54.23±1.12 |
| post-hoc | 41.40±0.66 | 43.53±0.68 | 46.87±0.57 | 50.27±0.56 | 52.74±1.31 |
| p-h w losses | 41.32±0.80 | 43.50±0.71 | 46.74±0.69 | 50.53±0.69 | 53.21±1.21 |
| p-h w sampling | 41.26±0.67 | 43.58±0.71 | 46.79±0.58 | 50.04±0.92 | 50.22±1.22 |
| **Roman-empire** | | | | | |
| N2V (inductive) | 57.60±0.97 | 62.77±0.49 | 68.75±0.40 | 73.50±0.46 | 76.17±0.69 |
| frozen (λ = 1) | 52.23±0.91 | 59.19±0.86 | 66.86±0.46 | 71.91±0.54 | 75.46±0.84 |
| post-hoc | 51.27±1.34 | 58.78±0.92 | 66.50±0.57 | 71.89±0.47 | 74.81±0.67 |
| p-h w losses | 52.69±0.76 | 59.35±0.51 | 66.27±0.61 | 71.22±0.56 | 74.97±1.02 |
| p-h w sampling | 50.42±1.05 | 59.17±0.56 | 66.83±0.61 | 71.84±0.63 | 74.79±0.80 |
\ No newline at end of file diff --git a/ICML/2025/iN2V_ Bringing Transductive Node Embeddings to Inductive Graphs/images.zip b/ICML/2025/iN2V_ Bringing Transductive Node Embeddings to Inductive Graphs/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..bdc251643fe41050d0c6a63874e2f76d921894d3 --- /dev/null +++ b/ICML/2025/iN2V_ Bringing Transductive Node Embeddings to Inductive Graphs/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9c0e09dcc81eef835022036d5e2f3ebddc60b9ad6953635de62377f98ef4f4f5 +size 3745476 diff --git a/ICML/2025/iN2V_ Bringing Transductive Node Embeddings to Inductive Graphs/layout.json b/ICML/2025/iN2V_ Bringing Transductive Node Embeddings to Inductive Graphs/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..634151ba64caed78c2fae631301f7e29a4f9db96 --- /dev/null +++ b/ICML/2025/iN2V_ Bringing Transductive Node Embeddings to Inductive Graphs/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:408f69a01ba9738868afc8f748167dbdb726000949969abe3a77d4b2d9e59601 +size 654678 diff --git a/ICML/2025/polybasic Speculative Decoding Through a Theoretical Perspective/c209dea3-671d-40ba-8304-63ed84180a96_content_list.json b/ICML/2025/polybasic Speculative Decoding Through a Theoretical Perspective/c209dea3-671d-40ba-8304-63ed84180a96_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..f93dfd9cffa94a613d24cf9e3ceb807e2df0ecc5 --- /dev/null +++ b/ICML/2025/polybasic Speculative Decoding Through a Theoretical Perspective/c209dea3-671d-40ba-8304-63ed84180a96_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f9245c7ed6e768826e6cdca5b63b504d87df6222488eec88c8962938f06aa81c +size 84065 diff --git a/ICML/2025/polybasic Speculative Decoding Through a Theoretical Perspective/c209dea3-671d-40ba-8304-63ed84180a96_model.json b/ICML/2025/polybasic Speculative Decoding Through a Theoretical Perspective/c209dea3-671d-40ba-8304-63ed84180a96_model.json new file mode 100644 index 0000000000000000000000000000000000000000..753792f09615d88a36f2f4a5066b4090a780d9d0 --- /dev/null +++ b/ICML/2025/polybasic Speculative Decoding Through a Theoretical Perspective/c209dea3-671d-40ba-8304-63ed84180a96_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5f12c2a78571de494ec5171993fca3195392038ea3ea909bd0d3020ce779a2bc +size 105526 diff --git a/ICML/2025/polybasic Speculative Decoding Through a Theoretical Perspective/c209dea3-671d-40ba-8304-63ed84180a96_origin.pdf b/ICML/2025/polybasic Speculative Decoding Through a Theoretical Perspective/c209dea3-671d-40ba-8304-63ed84180a96_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..6cc86017b3fc2e1244386a69220aaf2f551d659b --- /dev/null +++ b/ICML/2025/polybasic Speculative Decoding Through a Theoretical Perspective/c209dea3-671d-40ba-8304-63ed84180a96_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a033e7e90146cba7156e1f23e57b262bb45c34946b6e63f304a5e1b9ad1474bf +size 1542711 diff --git a/ICML/2025/polybasic Speculative Decoding Through a Theoretical Perspective/full.md b/ICML/2025/polybasic Speculative Decoding Through a Theoretical Perspective/full.md new file mode 100644 index 0000000000000000000000000000000000000000..9deb2243e3324c578ffecc9c6547bec2c20e86af --- /dev/null +++ b/ICML/2025/polybasic Speculative Decoding Through a Theoretical Perspective/full.md @@ -0,0 +1,418 
@@ +# Polybasic Speculative Decoding Through a Theoretical Perspective + +Ruilin Wang1 Huixia Li2 Yuexiao Ma1,2 Xiawu Zheng1,3,4 Fei Chao1 Xuefeng Xiao2 Rongrong Ji1,3 + +# Abstract + +Inference latency stands as a critical bottleneck in the large-scale deployment of Large Language Models (LLMs). Speculative decoding methods have recently shown promise in accelerating inference without compromising the output distribution. However, existing work typically relies on a dualistic draft-verify framework and lacks rigorous theoretical grounding. In this paper, we introduce a novel polybasic speculative decoding framework, underpinned by a comprehensive theoretical analysis. Specifically, we prove a fundamental theorem that characterizes the optimal inference time for multi-model speculative decoding systems, shedding light on how to extend beyond the dualistic approach to a more general polybasic paradigm. Through our theoretical investigation of multi-model token generation, we expose and optimize the interplay between model capabilities, acceptance lengths, and overall computational cost. Our framework supports both standalone implementation and integration with existing speculative techniques, leading to accelerated performance in practice. Experimental results across multiple model families demonstrate that our approach yields speedup ratios ranging from $3.31 \times$ to $4.01 \times$ for LLaMA2-Chat 7B, up to $3.87 \times$ for LLaMA3-8B, up to $4.43 \times$ for Vicuna7B and up to $3.85 \times$ for Qwen2-7B—all while preserving the original output distribution. We release our theoretical proofs and implementation code to facilitate further investigation into polybasic speculative decoding. + +# 1. Introduction + +Large Language Models (LLMs) have substantially advanced Natural Language Processing (NLP), achieving leading performance in a wide range of tasks. Yet their exceptional capabilities are tempered by significant computational demands, particularly in low-latency scenarios. Among multiple acceleration techniques, speculative decoding (Stern et al., 2018; Leviathan et al., 2023; Xia et al., 2023; Chen et al., 2023a; Anonymous, 2025b;a; Teng et al., 2024; Zhang et al., 2024b) has emerged as a strategy to speed up inference while preserving output fidelity. + +The current speculative decoding ecosystem largely hinges on draft-then-verify paradigms, which spawn various subdirections such as the design of lightweight draft models (Leviathan et al., 2023; Xia et al., 2023; Chen et al., 2023a; Kim et al., 2024a; Svirschevski et al., 2024; Yin et al., 2024; Sadhukhan et al., 2024), hierarchical token structures (Stern et al., 2018; Miao et al., 2024; Du et al., 2024), and unified architectures (Yi et al., 2024; Cai et al., 2024). Verification strategies typically follow three approaches: greedy sampling, speculative sampling (Leviathan et al., 2023), and typical acceptance (Cai et al., 2024). Despite these efforts, current strategies remain limited by a dualistic relationship between draft and target models (Qin et al., 2024; Liu et al., 2024; Gui et al., 2024; Khisti et al., 2024), affecting key parameters such as acceptance length due to inherent capacity disparities between the two models. While some recent works (Chen et al., 2023b; Kim et al.; Spector & Ré, 2023) investigate multi-level drafts, they still employ a singular top-level target model. Moreover, the field has hitherto lacked an overarching theoretical framework to guide system design and provide robust performance guarantees. 
+ +In this paper, we introduce a principled polybasic speculative decoding framework that uses multiple interconnected models, grounded in a thorough theoretical analysis. Our investigation yields two central insights. First, we derive a fundamental relationship between the number of forward passes and average acceptance lengths that dictates optimal system-level inference speed. This relationship allows us to precisely quantify potential inference speedups when adding additional models. Second, we establish the capacity of speculative sampling to enhance stability in acceptance lengths. By optimizing sampling parameters, we can reduce + +variance in token acceptance to achieve more predictable performance. + +Building on these insights, we propose a unified architecture for polybasic speculative decoding wherein multiple draft models coordinate with each other and with a single target model. Our implementation guidelines detail how to select models, set speculation lengths, and implement multi-stage verification procedures that maximize throughput. Experimental evaluations demonstrate that this approach outperforms typical dualistic methods on diverse tasks including MT-bench (Zheng et al., 2023), translation, summarization, QA, mathematical reasoning, and Retrieval-Augmented Generation (RAG). Our empirical results indicate that the proposed polybasic framework maintains output fidelity while delivering speedup ratios from $3 \times$ to over $4 \times$ across a range of widely used LLMs (e.g., Vicuna-7B, LLaMA2-Chat 7B, LLaMA3-8B). + +The main contributions of this work are summarized as follows: + +- We develop a formal theoretical framework for polybasic speculative decoding, identifying the system-level dependencies between model forward-pass cost, acceptance lengths, and stable acceleration performance. +- We prove a fundamental theorem that provides a rigorous expression for optimal inference time in multi-model speculative decoding, highlighting conditions under which additional models improve speedups. +- We show that speculative sampling significantly reduces variance in token acceptance lengths for multi-model settings, increasing stability and improving inference throughput. +- Our empirical investigation demonstrates the effectiveness of the proposed polybasic approach, achieving notable speedups (up to $4.43 \times$ ) on widely used LLMs across various tasks, while preserving the target model's output distribution. + +# 2. Related Work + +The concept of speculative decoding originated from blockwise parallel decoding (Stern et al., 2018), showcasing the viability of partially parallel language generation. More recently, research on speculative decoding (Stewart et al., 2024; Zafrir et al., 2024) has coalesced around two dimensions: drafting strategies for token prediction and verification mechanisms for ensuring correctness. + +Drafting Strategies. Drafting approaches typically follow either independent or self-drafting protocols. Independent drafting involves utilizing smaller or more efficient models + +to propose candidate tokens, later verified by a larger model. Methods range from training specialized drafters (Leviathan et al., 2023; Xia et al., 2023; Chen et al., 2023a; Kim et al., 2024a; Metel et al., 2024) to zero-shot usage of pre-existing models (Spector & Ré, 2023). 
Self-drafting employs the same model at intermediate stages, as in blockwise decoding (Stern et al., 2018; Xiao et al., 2024), early exiting (Yang et al., 2023), or mask-predict (Zhao et al., 2024), aiming to amortize computation within the same architecture. + +Verification Mechanisms. Verification primarily ensures that proposed tokens maintain consistency with the target distribution. Greedy verification (Kim et al., 2024a; Xia et al., 2023; Agrawal et al., 2024) is conceptually straightforward but may hinder speedups for certain tasks. Speculative sampling (Leviathan et al., 2023) introduces a probabilistic acceptance rule that adaptively filters tokens while retaining a high acceptance length. Token-tree-based verification (Miao et al., 2024; Spector & Ré, 2023; Lu et al., 2024; Gao et al., 2024) provides hierarchical checks, which can be beneficial for highly parallel architectures. + +Recent Advances and Limitations. Recent work on cascade or multi-level drafting (Chen et al., 2023b; Sun et al., 2024) has partially moved beyond the dualistic draft-target scheme. TRIFORCE (Sun et al., 2024) tackles long sequence generation by introducing a two-level hierarchy with retrieval-based drafting and partial KV cache as an intermediate layer, achieving up to $2.31 \times$ speedup for Llama2-7B-128K. CS Drafting (Chen et al., 2023b), on the other hand, employs vertical and horizontal cascades to eliminate neural autoregressive generation and optimize time allocation in drafting, resulting in up to $81\%$ additional speedup over standard speculative decoding. While TRIFORCE focuses on memory efficiency in long-context scenarios through KV cache optimization, CS Drafting targets general inference optimization through cascade structures and statistical drafting. + +However, these approaches usually rely on empirical heuristics without a unified theoretical framework to guide model selection, acceptance-length control, and stability analysis. Our work explicitly addresses these gaps by introducing a comprehensive theoretical treatment of polybasic speculative decoding and validating the resulting system design empirically. + +# 3. Polybasic Speculative Decoding Framework + +Although speculative decoding has been demonstrated as an effective technique for single-model verification, its acceleration potential remains capped by the inherent draft-target capacity gap in dualistic paradigms. We propose a polybasic speculative decoding framework, which systematically + +![](images/72a0591a80329cff2aebed5690c834da63d693ce248e661e6758dd5e69f6118b.jpg) +(a) Dualistic speculative decoding + +![](images/575380489b5fae0510ebbb13d039eebf3073a4345a06774fffaa57836ea24167.jpg) +(b) Polybasic speculative decoding +Figure 1. Comparison of speculative decoding frameworks. (a) Traditional dualistic approach with a single draft model. (b) Our polybasic framework with multiple draft models achieves superior performance ( $4 \times$ speedup and 8-10 tokens acceptance length) while maintaining good generalization ability. The framework demonstrates significant improvements over the dualistic baseline. + +employs multiple models to increase parallelism and acceptance length while preserving fidelity to the final target distribution. Below, we detail the core problem setting, theoretical foundations, and practical instantiations. + +# 3.1. 
Problem Formulation + +Let us consider a chain of models $\mathcal{M} = \{M_1, \ldots, M_n\}$, where $M_1$ is the final target model we wish to replicate in distribution, and $M_2, \ldots, M_n$ act as drafters at progressively lower capacity (higher index indicates a smaller or faster model). Let $\mathcal{V}$ be the vocabulary, and $p_i(x \mid x_{\leq t}) = M_i(x_{\leq t})$ the distribution over $\mathcal{V}$ given context $x_{\leq t}$. + +At each decoding step $t$, $M_{n}$ drafts a block of $K$ tokens, verified in ascending order by $M_{n-1}, M_{n-2}, \ldots, M_{1}$. Tokens are accepted if they do not exceed a certain mismatch criterion, reflecting speculative sampling, greedy matching, or another verification rule. Define: + +$$ +L_{i} = \mathbb{E}\big[\#\text{ of consecutive tokens accepted by } M_{i}\big], \tag{1} +$$ + +i.e., the expected block length accepted when verifying with model $M_{i}$. Denoting by $F_{i}$ the number of forward passes that $M_{i}$ must perform, our goal is to minimize the total inference time + +$$ +T = \sum_{i = 1}^{n} F_{i} \cdot T_{i}, \tag{2} +$$ + +where $T_{i}$ is the cost of a single forward pass for model $M_{i}$. + +# 3.2. Theoretical Foundations + +We establish fundamental properties of multi-model (polybasic) speculative decoding that govern how additional models impact computational cost and acceptance lengths. Our analysis focuses on two main aspects: (i) optimal inference time and (ii) stability of acceptance lengths. + +Optimal Inference Time. In a conventional dualistic system with one draft model $(M_2)$ and a single target $(M_1)$, the total inference time is approximately equal to $\frac{N}{L_1} T_1 + \beta \frac{N}{L_1} T_2$, where $N$ is the sequence length and $\beta$ is a system-dependent scaling factor reflecting the final draft model's capability. In a polybasic setting with $n > 2$ models, additional drafting layers can bring more tokens per verification cycle if acceptance lengths between intermediate pairs are high. However, each additional model also introduces its own forward-pass costs. Formally, we have: + +Lemma 3.1 (Optimal Inference Time). For an $n$-model polybasic system generating $N$ tokens, the total inference time $T$ can be expressed as: + +$$ +T = \sum_{i = 1}^{n - 1} \frac{N}{L_{i}} \cdot T_{i} + \beta \cdot \frac{N}{L_{n - 1}} T_{n}, \tag{3} +$$ + +where $L_{i}$ is the expected acceptance length for verification by $M_{i}$, and $\beta$ is a system-dependent scaling factor reflecting the final draft model's capability. + +Sketch of Proof. We segment the total generation length $N$ into accepted blocks validated by pairs $(M_i, M_{i+1})$. Each model $M_i$ must run as many forward passes as needed to accept $N$ tokens in total. A more detailed version of the proof incorporates the block acceptance process for each adjacency $(M_i, M_{i+1})$, culminating in the time decomposition of Equation (3). + +Model Selection Criterion. The next question is whether introducing a new model $M_{\mathrm{new}}$ between $M_{i}$ and $M_{i+1}$ improves $T$. The improvement depends on whether the reduced cost from higher acceptance length outweighs the additional forward-pass overhead. Formally: + +Theorem 3.2 (Model Insertion Efficiency). Adding $M_{\mathrm{new}}$ between $M_{i}$ and $M_{i+1}$ decreases total inference time if and only if it achieves a sufficiently large increase in acceptance lengths, balanced against its forward-pass cost $T_{\mathrm{new}}$.
Concretely, if $L_{\mathrm{new}}$ is the acceptance length when verifying tokens from $M_{\mathrm{new}}$ against $M_{i}$, and $L_{i+1}'$ is the acceptance length from $M_{i+1}$'s perspective, then improvement occurs if: + +$$ +\frac{T_{\mathrm{new}}}{T_{i}} < L_{\mathrm{new}} \left(\frac{1}{L_{i}} - \frac{1}{L_{i\text{-new}}}\right) \quad \text{or} +$$ + +$$ +\frac{T_{\mathrm{new}}}{T_{i+1}} < \beta \left(\frac{L_{\mathrm{new}\text{-}(i+1)}}{L_{i}} - 1\right) +$$ + +Proof. First, we prove the case for three models: + +For $i = 2$ + +$$ +T = \frac{N}{L_{1}} \cdot T_{1} + \beta \cdot \frac{N}{L_{1}} \cdot T_{2} \tag{4} +$$ + +For $i = 3$ + +$$ +T = \frac{N}{L_{1}'} \cdot T_{1} + \frac{N}{L_{2}'} \cdot T_{2}' + \beta \cdot \frac{N}{L_{2}'} \cdot T_{3}' \tag{5} +$$ + +where $T_{i}$ is the inference time of the $i$-th model, $\beta$ is considered to be equal in both equations, and $T_{2} = T_{3}'$, $L_{1}' > L_{2}' > L_{1}$. + +We can calculate the difference between Equation 5 and Equation 4: + +$$ +\begin{array}{l} N \cdot \left(\frac{1}{L_{1}'} - \frac{1}{L_{1}}\right) \cdot T_{1} + \frac{N}{L_{2}'} \cdot T_{2}' \\ + \beta \cdot N \cdot \left(\frac{1}{L_{2}'} - \frac{1}{L_{1}}\right) \cdot T_{2} < 0 \end{array} +$$ + +The expression is less than 0 if either of the following conditions is met: + +Condition 1: Sum of the first two terms is less than 0 + +$$ +\begin{array}{l} N \cdot \left(\frac{1}{L_{1}'} - \frac{1}{L_{1}}\right) \cdot T_{1} + \frac{N}{L_{2}'} \cdot T_{2}' < 0 \\ \Leftrightarrow \frac{T_{2}'}{T_{1}} < L_{2}' \cdot \left(\frac{1}{L_{1}} - \frac{1}{L_{1}'}\right) \end{array} +$$ + +OR + +Condition 2: Sum of the last two terms is less than 0 + +$$ +\begin{array}{l} \frac{N}{L_{2}'} \cdot T_{2}' + \beta \cdot N \cdot \left(\frac{1}{L_{2}'} - \frac{1}{L_{1}}\right) \cdot T_{2} < 0 \\ \Leftrightarrow \frac{T_{2}'}{T_{2}} < \beta \cdot \left(\frac{L_{2}'}{L_{1}} - 1\right) \end{array} +$$ + +This result generalizes to inserting a new model at any position in a polybasic system. When inserting model $M_{new}$ between $M_i$ and $M_{i+1}$, we can treat all models before the insertion point $(M_1 \text{ through } M_i)$ as a single composite model, and all models after the insertion point $(M_{i+1} \text{ through } M_k)$ as another composite model. This reduces the general case to the three-model case proven above, where the composite model before insertion corresponds to $M_1$, the new model corresponds to $M_2'$, and the composite model after insertion corresponds to $M_2$. + +Therefore, the same conditions for efficiency improvement apply at any insertion point in the model sequence, subject to the specified constraints on acceptance lengths. + +Stability Analysis. Beyond achieving higher acceptance lengths, stability in acceptance is crucial for consistent speedups. We analyze speculative sampling with probability $p_i = 1 - \alpha$ to accept a token from $M_{i + 1}$ if it is likely under $M_{i}$'s distribution. Let $\sigma_i^2$ be the variance of acceptance lengths.
As shown below, for multi-model chaining, acceptance length variance grows with smaller $p_i$ , implying that high acceptance probability supports stable performance: + +Theorem 3.3 (Sampling Stability). In the model chain using speculative sampling with acceptance probability $p_i = 1 - \alpha$ , the variance in acceptance length satisfies: + +$$ +\sigma^ {2} = \frac {\alpha [ 1 - (n ^ {2} - 1) \alpha^ {n} ] - (n ^ {2} - 1) \alpha^ {n + 1}}{(1 - \alpha) ^ {2}}. +$$ + +Proof. Let $p = 1 - \alpha$ be the probability of accepting a token. For a truncated geometric distribution of maximum $n$ trials, define: + +$$ +S = \sum_ {k = 1} ^ {n - 1} k \cdot (1 - p) ^ {k - 1}. +$$ + +Using standard manipulation (method of differences), one can show: + +$$ +S = \frac {1 - (1 - p) ^ {n - 1} - n (1 - p) ^ {n - 1} + (1 - p) ^ {n}}{p ^ {2}}. +$$ + +Hence, the expectation of the acceptance length, allowing up to $n$ tokens, is + +$$ +E [ N ] = \frac {1 - (1 - p) ^ {n}}{p}. +$$ + +We similarly compute + +$$ +E [ N ^ {2} ] = \sum_ {k = 1} ^ {n - 1} k ^ {2} \cdot p \cdot (1 - p) ^ {k - 1} + n ^ {2} \cdot (1 - p) ^ {n - 1}. +$$ + +After careful algebra (omitted for brevity), one obtains: + +$$ +\begin{array}{l} E [ N ^ {2} ] = \\ \frac {1 - (1 - p) ^ {n} (n ^ {2} + 2 n - 1) + 2 (1 - p) ^ {n + 1} (n - 1)}{p ^ {2}}. \\ \end{array} +$$ + +Thus, + +$$ +\operatorname {V a r} (N) = E [ N ^ {2} ] - (E [ N ]) ^ {2}, +$$ + +leading to the formula stated in Theorem 3.3. + +![](images/9b29a2ff602114d007154c19cc00d5566d416d62bfd36b8f191b961aec212ea4.jpg) + +Collectively, these results establish a principled foundation for polybasic speculative decoding. Given model inference times $T_{i}$ and acceptance probabilities, one can estimate the optimal system layout via Equation (3), gauge whether a new model confers net benefit (Theorem 3.2), and use speculative sampling to ensure stable acceptance lengths (Theorem 3.3). + +# 3.3. Three-Model System Design + +To illustrate practical deployment, we describe a three-model system ( $M_1$ : target, $M_2$ : intermediate, $M_3$ : lightweight) that exemplifies the design choices guided by our theory. + +Architecture. Our reference system includes: + +- $M_{1}$ (Target): A high-capacity model such as Vicuna7B or LLaMA2-Chat 7B. +- $M_2$ (Intermediate): A quantized 4-bit version of $M_1$ or a comparable mid-size model to bridge the capacity gap. +- $M_{3}$ (Draft): A lightweight, fast model (e.g., EA-GLE2(Li et al., 2024)) for initial token proposals. + +By Theorem 3.2, $M_2$ should be inserted if it raises the acceptance length enough to offset its own forward-pass cost. + +Staged Verification. Tokens first pass from $M_3$ to $M_2$ , whose verification is relatively fast. Accepted tokens are then periodically verified by $M_1$ . This two-stage verification acts as a filter, rapidly discarding problematic tokens at the cheaper $M_2$ stage. The threshold for passing tokens to $M_1$ is set to accumulate a small block (e.g., $\mu$ tokens) to amortize $M_1$ 's forward-pass overhead. This setup capitalizes on the fact that $M_3$ can generate numerous tentative tokens quickly, while $M_1$ only checks consolidated blocks of already moderately validated tokens. 
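To make these quantities concrete, the following minimal Python sketch (our own, not the authors' released code; all function and variable names are ours) implements Equation (3), the two sufficient conditions of Theorem 3.2, and the truncated-geometric acceptance-length statistics from the proof of Theorem 3.3.

```python
# Illustrative helpers for Lemma 3.1, Theorem 3.2, and Theorem 3.3
# (our sketch, not the authors' implementation; all names are ours).

def total_time(N, L, T, beta):
    """Eq. (3): L = [L_1, ..., L_{n-1}] acceptance lengths, T = [T_1, ..., T_n] per-pass costs."""
    assert len(T) == len(L) + 1
    verify = sum(N / L[i] * T[i] for i in range(len(L)))  # verification passes of M_1 ... M_{n-1}
    draft = beta * N / L[-1] * T[-1]                      # drafting passes of the last model M_n
    return verify + draft

def insertion_helps(T_new, T_i, T_ip1, L_i, L_new, L_i_new, L_new_ip1, beta):
    """Sufficient conditions of Theorem 3.2 for inserting M_new between M_i and M_{i+1}."""
    cond1 = T_new / T_i < L_new * (1.0 / L_i - 1.0 / L_i_new)
    cond2 = T_new / T_ip1 < beta * (L_new_ip1 / L_i - 1.0)
    return cond1 or cond2

def acceptance_stats(alpha, n):
    """Mean and variance of the acceptance length under speculative sampling with
    per-token rejection probability alpha and at most n drafted tokens (Theorem 3.3 setup)."""
    p = 1.0 - alpha
    mean = (1.0 - (1.0 - p) ** n) / p
    second = (1.0 - (1.0 - p) ** n * (n * n + 2 * n - 1)
              + 2.0 * (1.0 - p) ** (n + 1) * (n - 1)) / p ** 2  # E[N^2] as stated in the proof
    return mean, second - mean ** 2
```

For example, with the compliant-insertion numbers reported later in Table 1 ($T_i = 22$ ms, $T_{\mathrm{new}} = 7.00$ ms, $L_i = 4.34$, $L_{\mathrm{new}} = 4.67$, $L_{i\text{-new}} = 6.26$), the first condition evaluates to $0.318 < 0.330$.

Algorithm 1, presented next, formalizes the staged loop described above. As a companion, here is a schematic Python rendition (again ours, with the three models replaced by plain callables so the control flow runs stand-alone; drafting is conditioned on tokens already accepted by $M_2$, one natural reading of the pseudocode).

```python
# Schematic rendition of Algorithm 1 (our sketch). The "models" are plain callables,
# so the control flow can be exercised without any real LLMs.
import random

def polybasic_decode(ctx, N, K, mu, m3_draft, m2_accept, m1_accept_block, m1_sample_one):
    """m3_draft(prefix, K) -> K candidate tokens; m2_accept(prefix, tok) -> bool;
    m1_accept_block(prefix, block) -> bool; m1_sample_one(prefix) -> a single token."""
    out = list(ctx)
    accepted, cnt = [], 0
    while len(out) < N:
        # Draft with M3 and filter step-wise with the cheaper M2.
        for tok in m3_draft(out + accepted, K):
            if not m2_accept(out + accepted, tok):
                break
            accepted.append(tok)
            cnt += 1
        # Once a block of at least mu tokens survives M2, verify it with M1.
        if cnt >= mu:
            if m1_accept_block(out, accepted):
                out.extend(accepted)            # whole block committed
            else:
                out.append(m1_sample_one(out))  # fallback: one token from M1 itself
            accepted, cnt = [], 0
    return out[:N]

# Toy demo with random stand-ins for the three models.
if __name__ == "__main__":
    random.seed(0)
    seq = polybasic_decode(
        ctx=[0], N=32, K=4, mu=8,
        m3_draft=lambda prefix, k: [random.randint(0, 9) for _ in range(k)],
        m2_accept=lambda prefix, tok: random.random() < 0.9,
        m1_accept_block=lambda prefix, block: random.random() < 0.8,
        m1_sample_one=lambda prefix: random.randint(0, 9),
    )
    print(len(seq))
```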
+ +Algorithm 1 Polybasic Speculative Decoding (Three Models) +1: Input: Target model $M_1$ , intermediate model $M_2$ , draft model $M_3$ +2: Input: Context $x_{\leq t}$ , total length $N$ +3: Input: Draft length $K$ , threshold $\mu$ +4: Initialize: $t \gets |x_{\leq t}|$ , accepted $\gets \emptyset$ , cnt $\gets 0$ +5: while $t < N$ do +6: // Draft and verify with $M_3$ and $M_2$ +7: $\widetilde{x}_{1:K} \gets M_3(x_{\leq t}) \quad //$ Draft +8: $p_{1:K} \gets M_2(x_{\leq t}, \widetilde{x}_{1:K}) \quad //$ Verify +9: for $i = 1$ to $K$ do +10: if $\mathrm{VERIFY}(\widetilde{x}_i, p_i)$ then +11: accepted.append( $\widetilde{x}_i$ ) +12: cnt $\gets$ cnt + 1 +13: else +14: break +15: end if +16: end for +17: // Check if threshold reached for $M_1$ verification +18: if cnt $\geq \mu$ then +19: v $\leftarrow M_1(x_{\leq t}, \text{accepted}) // \text{Verify}$ +20: if $\mathrm{VERIFY}(\text{accepted}, v)$ then +21: $x_{t+1:t+cnt} \gets \text{accepted}$ +22: t $\leftarrow t + cnt$ +23: else +24: x_t+1 $\leftarrow \text{SampleOne}(v_1) // \text{failback acceptance}$ +25: t $\leftarrow t + 1$ +26: end if +27: accepted $\leftarrow \emptyset$ +28: cnt $\leftarrow 0$ +29: end if +30: end while + +Algorithm. Algorithm 1 details a generic procedure for polybasic speculative decoding with three models. The system accumulates tokens verified by $M_2$ until a threshold, then triggers verification by $M_1$ . Upon acceptance or partial acceptance, it appends tokens to the growing output sequence and advances $t$ . Simple or more sophisticated error handling (e.g., partial rollback) can be adopted if $M_1$ rejects tokens. + +Such a staged design is representative rather than exhaustive. Model scaling, verification strategies, and drafting lengths can be adapted to different resource constraints or performance targets. Our theoretical framework offers explicit performance bounds, making the design space more transparent. + +# 3.4. Generalization to Self-Drafting Methods + +Our polybasic speculative decoding framework provides a general theoretical foundation that naturally extends to self- + +drafting approaches, demonstrating the broad applicability of our theoretical principles. This generalization reveals new opportunities for designing efficient speculative decoding systems using model-internal components. + +In self-drafting methods like FFN heads approaches (Cai et al., 2024) (Ankner et al., 2024) (Kim et al., 2024b) and early exiting (Elhoushi et al., 2024) techniques, multiple prediction sources are derived from the same model architecture. These can be viewed as the fundamental draft models in a polybasic system, with the original model serving as the target. For instance, in an FFN heads approach, each head functions as a base draft model capable of generating tokens in parallel, forming the lowest tier in our polybasic hierarchy. Similarly, early exit points at different layers can be treated as draft models with varying capabilities. + +The optimization principles established in Section 3.2 remain applicable in this context, though they require careful algorithmic design to account for the shared computational paths in self-drafting approaches. The relationship between forward pass frequency and acceptance length provides guidance for optimal configuration of these systems, while our stability analysis (Theorem 3.3) informs the design of verification strategies. 
+ +This generalization demonstrates that our polybasic framework provides a unified theoretical foundation for speculative decoding, encompassing both independent draft models and self-drafting approaches. This suggests promising directions for developing new algorithmic techniques that fully exploit the parallel prediction capabilities inherent in these methods while maintaining the theoretical guarantees of our framework. + +# 4. Experiments + +We conduct comprehensive evaluations to validate our theoretical claims and to benchmark the proposed polybasic system against traditional dualistic strategies. Our experiments span various LLMs, tasks, and hyperparameter settings. + +# 4.1. Setup and Metrics + +Models and Tasks. We conducted experiments on Vicuna7B, LLaMA2-chat-7B, and LLaMA3-7B-Instruct. We evaluated our multi-model speculative system in SpecBench(Xia et al., 2024), across multiple tasks including multi-turn conversation, translation, summarization, question answering, mathematical reasoning, and retrieval-augmented generation, employing the MT-bench (Zheng et al., 2023), WMT14 DE-EN, CNN/Daily Mail (Nallapati et al., 2016), Natural Questions (Kwiatkowski et al., 2019), GSM8K (Cobbe et al., 2021), and DPR (Karpukhin et al., 2020). Speculative sampling (Leviathan et al., 2023) conducted experiments with a batch size of 1, similarly, the majority of our experiments + +also adopted this setting. + +Performance Metrics. Following previous speculative decoding studies, we focus on two metrics. Walltime speedup ratio $c$ : ratio of actual decoding time in our system vs. standard autoregressive decoding. Average acceptance length $\mu$ : mean number of consecutively accepted tokens per forward pass by the final (largest) model. + +Quantization and Training Details. For the intermediate model, we adopt 4-bit quantization (Ma et al., 2024) with a group size of 128, balancing reduced inference cost against quality. Draft models are built following EAGLE2, trained on ShareGPT data. Our experiments run on NVIDIA A800 80G GPUs. + +# 4.2. Theoretical Validation + +To empirically validate Theorem 3.2, we conducted two targeted experiments evaluating the impact of inserting additional draft models into a polybasic system. We measured $T_{\mathrm{new}}$ , $L_{\mathrm{new}}$ , and the resultant speedup ratio. Results are summarized in Table 4.2. + +Case 1: Non-Compliant Insertion We inserted a lightweight Vicuna-1B model between Vicuna-7B (target) and EAGLE2 (baseline drafter). Here, $T_{\mathrm{new}} / T_i = 0.80$ , while the acceptance-length improvement factor $L_{\mathrm{new}} \cdot (1 / L_i - 1 / L_{i\text{-new}}) = 0.117$ . Since $0.80 > 0.117$ , Theorem 3.2 predicts a performance degradation. Empirically, the speedup ratio dropped from $2.61 \times$ to $1.08 \times$ , confirming the theoretical prediction. This highlights the necessity of balancing model capacity and computational overhead when expanding the polybasic hierarchy. + +Case 2: Compliant Insertion We inserted a quantized Vicuna-7B (W4A16) model between the original Vicuna-7B and EAGLE2. Here, $T_{\mathrm{new}} / T_i = 0.318$ and $L_{\mathrm{new}} \cdot (1 / L_i - 1 / L_{i\mathrm{-new}}) = 0.330$ . Since $0.318 < 0.330$ , Theorem 3.2 predicts a speedup improvement. Experimentally, the system achieved a $3.48 \times$ speedup, up from $2.61 \times$ . This demonstrates the theorem's utility in guiding effective model selection. 
+ +Case 3: Generalization To substantiate the universal theoretical guidance of Theorem 3.2 across diverse cascaded speculative sampling methodologies, we reproduced Cascade Speculative Drafting (Chen et al., 2023b) and conducted rigorous evaluations spanning multiple model scales including FLAN-T5-XXL, Base, and Small variants. We inserted FLAN-T5-base between FLAN-T5-XXL and FLAN-T5-small. The configuration yields $T_{\mathrm{new}} / T_i = 0.403$ with acceptance metric $L_{\mathrm{new}} \cdot (1 / L_i - 1 / L_{i - \mathrm{new}}) = 0.461$ , satisfying the acceleration criterion $0.403 < 0.461$ as shown in Table 1. The system exhibits statistically significant speedup improvement from $3.19 \times$ to $3.88 \times$ . + +Table 1. Theoretical Validation via Model Insertion + +
| Case | $T_i$ (ms) | $L_{i\text{-new}}$ | $T_{\mathrm{new}}$ (ms) | $L_{\mathrm{new}}$ | $T_{i+1}$ (ms) | $L_i$ | Speedup |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Non-compliant | 22 | 3.83 | 17.61 | 3.77 | 4 | 4.34 | 2.61× → 1.08× |
| Compliant | 22 | 6.26 | 7.00 | 4.67 | 4 | 4.34 | 2.61× → 3.48× |
| CS Drafting | 47.52 | 3.50 | 19.16 | 3.02 | 12.42 | 2.28 | 3.19× → 3.88× |
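As a quick check (our arithmetic), the first condition of Theorem 3.2 can be recomputed directly from the Table 1 entries; for the non-compliant row the acceptance factor comes out negative (about −0.116), whose magnitude corresponds to the 0.117 quoted in the text.

```python
# Recomputing the first Theorem 3.2 condition from the Table 1 entries (our arithmetic).
rows = {
    "Non-compliant": dict(T_i=22.0,  T_new=17.61, L_i=4.34, L_new=3.77, L_i_new=3.83),
    "Compliant":     dict(T_i=22.0,  T_new=7.00,  L_i=4.34, L_new=4.67, L_i_new=6.26),
    "CS Drafting":   dict(T_i=47.52, T_new=19.16, L_i=2.28, L_new=3.02, L_i_new=3.50),
}
for name, r in rows.items():
    cost = r["T_new"] / r["T_i"]                           # forward-pass overhead ratio
    gain = r["L_new"] * (1 / r["L_i"] - 1 / r["L_i_new"])  # acceptance-length factor
    print(f"{name:13s} cost={cost:.3f} gain={gain:.3f} insertion_helps={cost < gain}")
# Expected output: roughly 0.800 vs -0.116 (no), 0.318 vs 0.330 (yes), 0.403 vs 0.462 (yes).
```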
+ +These experiments directly corroborate Theorem 3.2, showing that the theoretical conditions on $T_{\mathrm{new}}$ and $L_{\mathrm{new}}$ are necessary for improving system performance. The results emphasize the framework's ability to rigorously guide model selection and hierarchy design, moving beyond heuristic-driven approaches. + +# 4.3. Effectiveness + +Figures 2 and 3 summarize speedup ratios on various tasks. Our polybasic approach demonstrates clear gains over dualistic baseline systems (including EAGLE2 and Speculative Sampling). Notably: + +- Vicuna-7B achieves $3.16 \times$ on average and up to $4.43 \times$ in mathematical reasoning. +- LLaMA2-Chat 7B attains $3.66 \times$ overall, peaking at $4.10 \times$ in multi-turn conversation. +- LLaMA3-8B yields $3.31 \times -3.87 \times$ speedups, illustrating the method's adaptability to larger model sizes. +- qwen2-7B-Instruct demonstrates a $3.28 \times$ average speedup, which is approximately $69\%$ higher than EA-GLE2's $1.94 \times$ acceleration on the same model. + +Our analysis also shows that average acceptance lengths range from 9.1 to over 10 tokens, significantly higher than typical dual-model methods. This corroborates our theoretical claim that multi-tiered speculation improves acceptance efficiency. + +As shown in Figure 3, our method demonstrates substantial speedups across diverse tasks, with particularly strong performance in math reasoning (up to $4.43 \times$ ) and multi-turn conversation (up to $4.10 \times$ ). However, we observe relatively modest acceleration on summarization tasks, where the speedup ranges from $2.95 \times$ to $3.41 \times$ . This pattern can be attributed to the higher token generation requirements in summarization, which necessitates maintaining KV caches across multiple models in our polybasic speculative decoding framework. Despite this limitation, our approach is orthogonal to KV cache optimization techniques, suggesting potential for further improvements through the integration of cache-focused methods. + +# 4.4. Scalability to Larger Models + +To demonstrate the generalizability of our framework across model scales, we conducted additional experiments with Vicuna-13B and LLaMA-2-chat-70B models. As shown in Table 3, our polybasic approach maintains significant advantages over EAGLE baseline even when scaled to larger models. Specifically, we achieve $2.69 \times$ speedup with Vicuna-13B (vs. EAGLE2's $2.30 \times$ ) and $2.92 \times$ for LLaMA-70B (vs. $2.46 \times$ ), while maintaining substantially higher average acceptance lengths. These results confirm that our method's benefits are not limited to smaller models but extend to larger-scale LLMs. The slightly reduced absolute speedup ratios compared to 7B models align with expectations, as larger models naturally incur higher verification costs that partially offset drafting efficiency gains. + +# 4.5. Ablation Study: Speculative vs. Greedy Sampling + +To validate the impact of speculative sampling on stability, we compare acceptance-length variance from speculative vs. greedy verification. We sample 50 queries in a three-model setup and record acceptance-length distributions (Figure 4). As anticipated, speculative sampling yields smaller variance, indicating more stable acceptance lengths across diverse inputs. This result aligns with Theorem 3.3 and further justifies the use of speculative sampling in multi-tier verification. + +# 4.6. 
Four-Model System Discussion and Limitations + +While our theoretical analysis suggests the potential benefits of incorporating more models into the polybasic speculative decoding framework, empirical implementation of systems with four or more models faces practical challenges. Under our sufficient (though not necessary) condition for model insertion efficiency, it is currently difficult to find suitable off-the-shelf models that satisfy the theoretical requirements without additional training. This limitation primarily stems from the stringent balance needed between acceptance length improvements and computational overhead. However, we believe this barrier is not fundamental. Through future exploration of complementary optimization techniques, such as advanced KV cache management, model pruning, and quantization, we anticipate achieving breakthroughs in systems with four or more models, potentially + +![](images/966ea51507c7c206d16f91d8e8bdc5b63bb1f1d0e2821006f107d658ed8b6a22.jpg) +Figure 2. Speedup ratios for Vicuna-7B, LLaMA2-Chat 7B, LLaMA3-8B-Instruct and Qwen2-7B-Instruct on SpecBench. Our polybasic system consistently achieves the highest speedups $(3.16 \times -3.66 \times)$ , surpassing EAGLE2 and vanilla baselines. + +Table 2. Average acceptance length $\left( \mu \right)$ and speedup ratio (c) on different tasks. V7B: Vicuna-7B, L3-8B: LLaMA3-8B-Instruct, L2-7B: LLaMA2-Chat-7B, Q2-7B: Qwen2-7B-Instruct. + +
| Method | Model | MT (c / μ) | Trans. (c / μ) | Sum. (c / μ) | QA (c / μ) | Math (c / μ) | RAG (c / μ) | Overall (c / μ) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Our | V7B | 3.77x / 11.22 | 3.07x / 7.76 | 3.01x / 10.24 | 3.65x / 9.53 | 4.43x / 10.28 | 2.98x / 10.30 | 3.48x / 9.88 |
| Our | L3-8B | 3.70x / 9.97 | 3.39x / 8.86 | 3.02x / 9.38 | 3.16x / 9.08 | 3.87x / 10.08 | 2.71x / 9.24 | 3.31x / 9.44 |
| Our | L2-7B | 4.10x / 10.47 | 3.46x / 9.15 | 3.41x / 9.86 | 3.61x / 9.49 | 4.02x / 9.99 | 3.31x / 10.08 | 3.66x / 9.84 |
| Our | Q2-7B | 3.65x / 9.85 | 3.15x / 8.65 | 2.95x / 9.15 | 3.25x / 8.95 | 3.85x / 9.95 | 2.85x / 9.35 | 3.28x / 9.32 |
| EAGLE2 | V7B | 3.19x / 4.76 | 2.07x / 3.22 | 2.59x / 3.96 | 2.45x / 3.71 | 3.19x / 4.72 | 2.15x / 3.95 | 2.61x / 4.34 |
| EAGLE2 | L3-8B | 2.69x / 3.99 | 2.37x / 3.53 | 2.23x / 3.58 | 2.21x / 3.42 | 2.83x / 4.20 | 2.23x / 3.95 | 2.44x / 3.82 |
| EAGLE2 | L2-7B | 3.04x / 4.48 | 2.61x / 3.96 | 2.50x / 4.04 | 2.55x / 4.05 | 3.04x / 4.68 | 2.40x / 4.19 | 2.70x / 4.30 |
| EAGLE2 | Q2-7B | 2.40x / 3.74 | 1.45x / 2.45 | 1.59x / 3.06 | 1.81x / 2.91 | 2.63x / 4.26 | 1.72x / 3.27 | 1.94x / 3.51 |
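The roughly $69\%$ relative gain quoted for Qwen2-7B-Instruct in Section 4.3 follows directly from the Overall column of Table 2 (our arithmetic):

$$
\frac{3.28}{1.94} \approx 1.69,
$$

i.e., about $69\%$ higher than EAGLE2's speedup on the same model.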
+ +Table 3. Speedup Ratios and Acceptance Lengths on Larger Models + +
| Method | Model | c | μ |
| --- | --- | --- | --- |
| Our | Vicuna-13B | 2.69× | 8.62 |
| Our | LLaMA-70B | 2.92× | 7.48 |
| EAGLE | Vicuna-13B | 2.30× | 4.42 |
| EAGLE | LLaMA-70B | 2.46× | 4.08 |
+ +unlocking even greater acceleration benefits while maintaining inference quality. + +Our polybasic framework, like other speculative decoding methods, can be limited by large KV cache footprints, which scale with text length. Thus, for tasks with extensive context, the overhead from additional models can be more + +pronounced. As Figure 3 and Table 2 shows, we observe somewhat lower acceleration in summarization and RAG tasks than in shorter contexts. Addressing KV cache constraints via caching techniques (Xiao et al., 2023; Zhang et al., 2024c;a; Jin et al., 2024) is an active research avenue and remains a promising direction for future improvements. + +# 5. Conclusion + +We have presented a polybasic speculative decoding system that systematically extends beyond dualistic draft-target paradigms. By establishing a rigorous theoretical framework, we derived an expression for optimal inference time and showed how speculative sampling stabilizes acceptance lengths in multi-model systems. Extensive experiments spanning multiple tasks and model families corroborate our claims, demonstrating $4 \times$ speedups while preserving the + +![](images/f91841a46d1c1c54eeb26e73ed8ac7074f968dd1460fa47fc40a6a9aaa985134.jpg) + +![](images/386134e4d731081197fb49e064669e132d33dff1550a04115e444d652e6a9016.jpg) + +![](images/a161a8d8513ed833195c4c100aee5f695621218e2ed9d146e8bfed767581fed2.jpg) + +![](images/a8bb50be6d0378c3ff7785359663533bdb1b943a474452e493eb9d6b8619a26d.jpg) +Figure 3. Speedup by task. Our method excels in math tasks, reaching $4.43 \times$ with Vicuna-7B, while also maintaining strong accelerations in translation, QA, and multi-turn conversation. + +![](images/d99735118e9773746929ea26d308f73493445896b48f53ff9c32bb29ce6fa6f6.jpg) + +![](images/edbfa57e4e7cb16a0c6f339a4448f1ed5d2ee01dd5bb57e2678ac6f524f6e7a4.jpg) +Figure 4. Variance of acceptance length. Speculative sampling (blue) exhibits noticeably lower variance compared to greedy sampling (orange), aligning with our theoretical stability analysis. + +![](images/75dfbee8d1770c85bdcc34b322e3e7373e09e192f69971b6ec9ac455add2cc2a.jpg) + +target model's output distribution. + +In future work, we will extend our findings to more complex parallel computing scenarios by developing distributed speculative sampling systems. We also plan to explore more efficient caching strategies, implement dynamic adaptation of speculation lengths, and validate the framework's general + +ity across models of varying scales (from billions to trillions of parameters), aiming to continuously push the boundaries of efficient LLM inference. + +# Acknowledgements + +This work was supported by the National Science Fund for Distinguished Young Scholars (No.62025603), the National Natural Science Foundation of China (No. U21B2037, No. U22B2051, No. U23A20383, No. 62176222, No. 62176223, No. 62176226, No. 62072386, No. 62072387, No. 62072389, No. 62002305 and No. 62272401), and the Natural Science Foundation of Fujian Province of China (No. 2021J06003, No.2022J06001). + +# Impact Statement + +This paper presents work whose goal is to advance the field of Machine Learning. There are many potential societal consequences of our work, none which we feel must be specifically highlighted here. + +# References + +Agrawal, S., Jeon, W., and Lee, M. Adaedl: Early draft stopping for speculative decoding of large language models via an entropy-based lower bound on token acceptance probability. arXiv preprint arXiv:2410.18351, 2024. 
+Ankner, Z., Parthasarathy, R., Nrusimha, A., Rinard, C., Ragan-Kelley, J., and Brandon, W. Hydra: Sequentially-dependent draft heads for medusa decoding. arXiv preprint arXiv:2402.05109, 2024. +Anonymous. Judge decoding: Faster speculative sampling requires going beyond model alignment. In The Thirteenth International Conference on Learning Representations, 2025a. URL https://openreview.net/forum?id=mtSSFiqW6y. +Anonymous. Towards optimal multi-draft speculative decoding. In The Thirteenth International Conference on Learning Representations, 2025b. URL https://openreview.net/forum?id=9KxnxWOBA5. +Cai, T., Li, Y., Geng, Z., Peng, H., Lee, J. D., Chen, D., and Dao, T. Medusa: Simple llm inference acceleration framework with multiple decoding heads. arXiv preprint arXiv: 2401.10774, 2024. +Chen, C., Borgeaud, S., Irving, G., Lespiau, J.-B., Sifre, L., and Jumper, J. Accelerating large language model decoding with speculative sampling. arXiv preprint arXiv:2302.01318, 2023a. +Chen, Z., Yang, X., Lin, J., Sun, C., Huang, J., and Chang, K. C.-C. Cascade speculative drafting for even faster llm inference. arXiv preprint arXiv:2312.11462, 2023b. +Cobbe, K., Kosaraju, V., Bavarian, M., Chen, M., Jun, H., Kaiser, L., Plappert, M., Tworek, J., Hilton, J., Nakano, R., et al. Training verifiers to solve math word problems, 2021. URL https://arxiv.org/abs/2110.14168, 2021. +Du, C., Jiang, J., Yuanchen, X., Wu, J., Yu, S., Li, Y., Li, S., Xu, K., Nie, L., Tu, Z., et al. Glide with a cape: A low-hassle method to accelerate speculative decoding. arXiv preprint arXiv:2402.02082, 2024. +Elhoushi, M., Shrivastava, A., Liskovich, D., Hosmer, B., Wasti, B., Lai, L., Mahmoud, A., Acun, B., Agarwal, S., Roman, A., et al. Layer skip: Enabling early exit inference and self-speculative decoding. arXiv preprint arXiv:2404.16710, 2024. +Gao, X., Xie, W., Xiang, Y., and Ji, F. Falcon: Faster and parallel inference of large language models through enhanced semi-autoregressive drafting and custom-designed decoding tree. arXiv preprint arXiv:2412.12639, 2024. + +Gui, L., Xiao, B., Su, L., and Chen, W. Boosting lossless speculative decoding via feature sampling and partial alignment distillation. arXiv preprint arXiv:2408.15562, 2024. +Jin, H., Han, X., Yang, J., Jiang, Z., Liu, Z., Chang, C.-Y., Chen, H., and Hu, X. Llm maybe longlm: Self-extend llm context window without tuning. arXiv preprint arXiv:2401.01325, 2024. +Karpukhin, V., Oğuz, B., Min, S., Lewis, P., Wu, L., Edunov, S., Chen, D., and Yih, W.-t. Dense passage retrieval for open-domain question answering. arXiv preprint arXiv:2004.04906, 2020. +Khisti, A., Ebrahimi, M. R., Dbouk, H., Behboodi, A., Memisevic, R., and Louizos, C. Multi-draft speculative sampling: Canonical architectures and theoretical limits. arXiv preprint arXiv:2410.18234, 2024. +Kim, S., Mangalam, K., Moon, S., Malik, J., Mahoney, M. W., Gholami, A., and Keutzer, K. Speculative decoding with big little decoder. Advances in Neural Information Processing Systems, 36, 2024a. +Kim, T., Jung, H., and Yun, S.-Y. A unified framework for speculative decoding with multiple drafters as a bandit. +Kim, T., Suresh, A. T., Papineni, K. A., Riley, M., Kumar, S., and Benton, A. Exploring and improving drafts in blockwise parallel decoding. In Workshop on Efficient Systems for Foundation Models II@ ICML2024, 2024b. +Kwiatkowski, T., Palomaki, J., Redfield, O., Collins, M., Parikh, A., Alberti, C., Epstein, D., Polosukhin, I., Devlin, J., Lee, K., et al. 
Natural questions: a benchmark for question answering research. Transactions of the Association for Computational Linguistics, 7:453-466, 2019. +Leviathan, Y., Kalman, M., and Matias, Y. Fast inference from transformers via speculative decoding. In International Conference on Machine Learning, pp. 19274-19286. PMLR, 2023. +Li, Y., Wei, F., Zhang, C., and Zhang, H. Eagle-2: Faster inference of language models with dynamic draft trees. arXiv preprint arXiv:2406.16858, 2024. +Liu, T., Li, Y., Lv, Q., Liu, K., Zhu, J., and Hu, W. Parallel speculative decoding with adaptive draft length. arXiv preprint arXiv:2408.11850, 2024. +Lu, X., Zeng, Y., Ma, F., Yu, Z., and Levorato, M. Improving multi-candidate speculative decoding. arXiv preprint arXiv:2409.10644, 2024. +Ma, Y., Li, H., Zheng, X., Ling, F., Xiao, X., Wang, R., Wen, S., Chao, F., and Ji, R. Affinequant: Affine transformation + +quantization for large language models. arXiv preprint arXiv:2403.12544, 2024. +Metel, M. R., Lu, P., Chen, B., Rezagholizadeh, M., and Kobyzev, I. Draft on the fly: Adaptive self-speculative decoding using cosine similarity. arXiv preprint arXiv:2410.01028, 2024. +Miao, X., Oliaro, G., Zhang, Z., Cheng, X., Wang, Z., Zhang, Z., Wong, R. Y. Y., Zhu, A., Yang, L., Shi, X., et al. Specinfer: Accelerating large language model serving with tree-based speculative inference and verification. In Proceedings of the 29th ACM International Conference on Architectural Support for Programming Languages and Operating Systems, Volume 3, pp. 932-949, 2024. +Nallapati, R., Zhou, B., Gulcehre, C., Xiang, B., et al. Abstractive text summarization using sequence-to-sequence rnns and beyond. arXiv preprint arXiv:1602.06023, 2016. +Qin, Z., Hu, Z., He, Z., Prakriya, N., Cong, J., and Sun, Y. Multi-token joint speculative decoding for accelerating large language model inference. arXiv e-prints, pp. arXiv-2407, 2024. +Sadhukhan, R., Chen, J., Chen, Z., Tiwari, V., Lai, R., Shi, J., Yen, I. E.-H., May, A., Chen, T., and Chen, B. Magicdec: Breaking the latency-throughput tradeoff for long context generation with speculative decoding. arXiv preprint arXiv:2408.11049, 2024. +Spector, B. and Ré, C. Staged speculative decoding: Exploiting large language model decoding inefficiencies for inference acceleration. arXiv preprint arXiv:2310.06334, 2023. +Stern, M., Shazeer, N., and Uszkoreit, J. Blockwise parallel decoding for deep autoregressive models. Advances in Neural Information Processing Systems, 31, 2018. +Stewart, L., Trager, M., Gonugondla, S. K., and Soatto, S. The n-grammys: Accelerating autoregressive inference with learning-free batched speculation. arXiv preprint arXiv:2411.03786, 2024. +Sun, H., Chen, Z., Yang, X., Tian, Y., and Chen, B. Triforce: Lossless acceleration of long sequence generation with hierarchical speculative decoding. arXiv preprint arXiv:2404.11912, 2024. +Svirschevski, R., May, A., Chen, Z., Chen, B., Jia, Z., and Ryabinin, M. Specexec: Massively parallel speculative decoding for interactive lmm inference on consumer devices. arXiv preprint arXiv:2406.02532, 2024. +Teng, Y., Shi, H., Liu, X., Ning, X., Dai, G., Wang, Y., Li, Z., and Liu, X. Accelerating auto-regressive text-to-image generation with training-free speculative jacobi decoding. arXiv preprint arXiv:2410.01699, 2024. + +Xia, H., Ge, T., Wang, P., Chen, S.-Q., Wei, F., and Sui, Z. Speculative decoding: Exploiting speculative execution for accelerating seq2seq generation. In Findings of the Association for Computational Linguistics: EMNLP 2023, pp. 
3909-3925, 2023. +Xia, H., Yang, Z., Dong, Q., Wang, P., Li, Y., Ge, T., Liu, T., Li, W., and Sui, Z. Unlocking efficiency in large language model inference: A comprehensive survey of speculative decoding. arXiv preprint arXiv:2401.07851, 2024. +Xiao, G., Tian, Y., Chen, B., Han, S., and Lewis, M. Efficient streaming language models with attention sinks. In The Twelfth International Conference on Learning Representations, 2023. +Xiao, Z., Zhang, H., Ge, T., Ouyang, S., Ordonez, V., and Yu, D. Parallelspec: Parallel drafter for efficient speculative decoding. arXiv preprint arXiv:2410.05589, 2024. +Yang, Y., Li, Y., Zhang, K., Pu, J., Gao, M., Zhang, T., Shao, R., Wang, W., and Tao, D. Ppd: Prediction-permutation-decoding for fast large language model inference. arXiv preprint arXiv:2312.17344, 2023. +Yi, H., Lin, F., Li, H., Ning, P., Yu, X., and Xiao, R. Generation meets verification: Accelerating large language model inference with smart parallel auto-correct decoding. arXiv preprint arXiv:2402.11809, 2024. +Yin, M., Chen, M., Huang, K., and Wang, M. A theoretical perspective for speculative decoding algorithm. Advances in Neural Information Processing Systems, 37:128082-128117, 2024. +Zafrir, O., Margulis, I., Shteyman, D., and Boudoukh, G. Fastdraft: How to train your draft. arXiv preprint arXiv:2411.11055, 2024. +Zhang, P., Liu, Z., Xiao, S., Shao, N., Ye, Q., and Dou, Z. Soaring from 4k to 400k: Extending lvm's context with activation beacon. arXiv preprint arXiv:2401.03462, 2024a. +Zhang, S., Wang, H., Ma, D., Zhu, Z., Chen, L., Lan, K., and Yu, K. Adaeagle: Optimizing speculative decoding via explicit modeling of adaptive draft structures. arXiv preprint arXiv:2412.18910, 2024b. +Zhang, Z., Sheng, Y., Zhou, T., Chen, T., Zheng, L., Cai, R., Song, Z., Tian, Y., Ré, C., Barrett, C., et al. H2o: Heavy-hitter oracle for efficient generative inference of large language models. Advances in Neural Information Processing Systems, 36, 2024c. +Zhao, Y., Xie, Z., Liang, C., Zhuang, C., and Gu, J. Lookahead: An inference acceleration framework for large language model with lossless generation accuracy. In + +Proceedings of the 30th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, pp. 6344-6355, 2024. +Zheng, L., Chiang, W.-L., Sheng, Y., Zhuang, S., Wu, Z., Zhuang, Y., Lin, Z., Li, Z., Li, D., Xing, E., et al. Judging llm-as-a-judge with mt-bench and chatbot arena. Advances in Neural Information Processing Systems, 36: 46595-46623, 2023. 
\ No newline at end of file diff --git a/ICML/2025/polybasic Speculative Decoding Through a Theoretical Perspective/images.zip b/ICML/2025/polybasic Speculative Decoding Through a Theoretical Perspective/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..17216c22da1b2ab3e03ef20732779d227c63a19e --- /dev/null +++ b/ICML/2025/polybasic Speculative Decoding Through a Theoretical Perspective/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0f203f19de5be5438a025090610225fe8f8af169b1068bf939c2af2022625434 +size 450183 diff --git a/ICML/2025/polybasic Speculative Decoding Through a Theoretical Perspective/layout.json b/ICML/2025/polybasic Speculative Decoding Through a Theoretical Perspective/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..0cdb9b7414bcc10f1f2c22eff3168a51200b4d6d --- /dev/null +++ b/ICML/2025/polybasic Speculative Decoding Through a Theoretical Perspective/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:46f6a2bd547c6f17012fef4b62d36b6eddd0ce9cb4cab6dab4b93a9c5f0b43dc +size 509632 diff --git a/ICML/2025/rStar-Math_ Small LLMs Can Master Math Reasoning with Self-Evolved Deep Thinking/89161b7c-6df7-4137-b941-fe2dd4396dd2_content_list.json b/ICML/2025/rStar-Math_ Small LLMs Can Master Math Reasoning with Self-Evolved Deep Thinking/89161b7c-6df7-4137-b941-fe2dd4396dd2_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..1246c128d4fd5e3fb9d33be4bd89d03c4d2efc52 --- /dev/null +++ b/ICML/2025/rStar-Math_ Small LLMs Can Master Math Reasoning with Self-Evolved Deep Thinking/89161b7c-6df7-4137-b941-fe2dd4396dd2_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:bc5555eef51f9a9cd7067e467a3c4494beb6315cd70b8f1b589d405371bc18b1 +size 156768 diff --git a/ICML/2025/rStar-Math_ Small LLMs Can Master Math Reasoning with Self-Evolved Deep Thinking/89161b7c-6df7-4137-b941-fe2dd4396dd2_model.json b/ICML/2025/rStar-Math_ Small LLMs Can Master Math Reasoning with Self-Evolved Deep Thinking/89161b7c-6df7-4137-b941-fe2dd4396dd2_model.json new file mode 100644 index 0000000000000000000000000000000000000000..751c65a5956870824580a304692b62afb94cb855 --- /dev/null +++ b/ICML/2025/rStar-Math_ Small LLMs Can Master Math Reasoning with Self-Evolved Deep Thinking/89161b7c-6df7-4137-b941-fe2dd4396dd2_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:df394b20eb3e7e5a0e25ea966bf7dd34f0b50ff23077e8a5a36e62f706d5b739 +size 189701 diff --git a/ICML/2025/rStar-Math_ Small LLMs Can Master Math Reasoning with Self-Evolved Deep Thinking/89161b7c-6df7-4137-b941-fe2dd4396dd2_origin.pdf b/ICML/2025/rStar-Math_ Small LLMs Can Master Math Reasoning with Self-Evolved Deep Thinking/89161b7c-6df7-4137-b941-fe2dd4396dd2_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..946f380c9f319182bf4c5f7816f26305c3bc0d45 --- /dev/null +++ b/ICML/2025/rStar-Math_ Small LLMs Can Master Math Reasoning with Self-Evolved Deep Thinking/89161b7c-6df7-4137-b941-fe2dd4396dd2_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ab591bfc6d2a38e9c107181eba7e617295a58374af05687e9f82dc9f496bf0ab +size 938387 diff --git a/ICML/2025/rStar-Math_ Small LLMs Can Master Math Reasoning with Self-Evolved Deep Thinking/full.md b/ICML/2025/rStar-Math_ Small LLMs Can Master Math Reasoning with Self-Evolved Deep Thinking/full.md new file mode 100644 index 
0000000000000000000000000000000000000000..afacf972b8e9e9af6e5c45ca67ec4495d464fc6c --- /dev/null +++ b/ICML/2025/rStar-Math_ Small LLMs Can Master Math Reasoning with Self-Evolved Deep Thinking/full.md @@ -0,0 +1,791 @@ +# rStar-Math: Small LLMs Can Master Math Reasoning with Self-Evolved Deep Thinking + +Xinyu Guan $^{*12}$ Li Lyna Zhang $^{*1\dagger}$ Yifei Liu $^{31}$ Ning Shang $^{1}$ Youran Sun $^{14}$ Yi Zhu $^{1}$ Fan Yang $^{1}$ Mao Yang $^{1}$ + +# Abstract + +We present rStar-Math to demonstrate that small language models (SLMs) can rival or even surpass the math reasoning capability of OpenAI o1, without distillation from superior models. rStarMath achieves this by exercising "deep thinking" through Monte Carlo Tree Search (MCTS), where a math policy SLM performs test-time search guided by an SLM-based process reward model. rStar-Math introduces three innovations to tackle the challenges in training the two SLMs: (1) a novel code-augmented CoT data synthesis method, which performs extensive MCTS rollouts to generate step-by-step verified reasoning trajectories used to train the policy SLM; (2) a novel process reward model training method that avoids naive step-level score annotation, yielding a more effective process preference model (PPM); (3) a self-evolution recipe in which the policy SLM and PPM are built from scratch and iteratively evolved to improve reasoning capabilities. Through 4 rounds of self-evolution with millions of synthesized solutions for 747k math problems, rStar-Math boosts SLMs' math reasoning to state-of-the-art levels. On MATH benchmark, it improves Qwen2.5-Math-7B from $58.8\%$ to $90.0\%$ , surpassing o1-preview by $+4.5\%$ . On the USA Math Olympiad (AIME), rStar-Math solves an average of $53.3\%$ (8/15) of problems, ranking among the top $20\%$ of the brightest high school math students. Code and data are available at https://github.com/microsoft/rStar. + +*Equal contribution ¹Microsoft Research Asia ²Peking University ³University of Science and Technology of China ⁴Tsinghua University; Xinyu Guan, Yifei Liu and Youran Sun did this work during the internship at MSRA. Correspondence to: Li Lyna Zhang . + +Proceedings of the $42^{nd}$ International Conference on Machine Learning, Vancouver, Canada. PMLR 267, 2025. Copyright 2025 by the author(s). + +
| Task (pass@1 Acc) | rStar-Math (Qwen-7B) | rStar-Math (Phi3-mini) | OpenAI o1-preview | OpenAI o1-mini | GPT-4o | DeepSeek-V3 |
| --- | --- | --- | --- | --- | --- | --- |
| MATH | 90.0 | 88.6 | 85.5 | 90.0 | 76.6 | 90.2 |
| AIME 2024 | 53.3 | 46.7 | 44.6 | 56.7 | 9.3 | 39.2 |
| Olympiad Bench | 65.6 | 60.3 | - | 65.3 | 43.3 | 55.4 |
| College Math | 60.5 | 59.3 | - | 57.8 | 48.5 | 58.9 |
| Omni-Math | 50.5 | 48.5 | 52.5 | 60.5 | 49.6 | 35.9 |
+ +Table 1. rStar-Math enables frontier math reasoning in SLMs via deep thinking over 64 trajectories. + +# 1 Introduction + +Recent studies have demonstrated that large language models (LLMs) are capable of tackling mathematical problems (Team, 2024; Yang et al., 2024; Liu et al., 2024). However, the conventional approach of having LLMs generate complete solutions in a single inference - akin to System 1 thinking (Daniel, 2011) - often yields fast but error-prone results (Valmeekam et al., 2023; OpenAI, 2023). In response, test-time compute scaling (Snell et al., 2024; Qi et al., 2024) suggests a paradigm shift toward a System 2-style thinking, which emulates human reasoning through a slower and deeper thought process. In this paradigm, an LLM serves as a policy model to generate multiple math reasoning steps, which are then evaluated by another LLM acting as a reward model (OpenAI, 2024). The steps and solutions deemed more likely to be correct are selected. The process repeats iteratively and ultimately derives the final answer. + +In the test-time compute paradigm, the key is to train a powerful policy model that generates promising solution steps and a reliable reward model that accurately evaluates them, both of which depend on high-quality training data. Unfortunately, it is well-known that off-the-shelf high-quality math reasoning data is scarce, and synthesizing high-quality math data faces fundamental challenges. For the policy model, it is challenging to distinguish erroneous reasoning steps from the correct ones, complicating the elimination of low-quality data. It is worth noting that in math reasoning, a correct final answer does not ensure the correctness of the entire reasoning trace (Lanham et al., 2023). Incorrect intermediate steps significantly decrease data quality. As for the reward model, process reward modeling (PRM) shows a great potential by providing fine-grained feedback on intermediate steps (Lightman et al., 2023). However, the training data is even scarcer in this regard: accurate step-by-step feedback + +requires intense human labeling efforts and is impractical to scale, while those automatic annotation attempts show limited gains due to noisy reward scores (Luo et al., 2024; Chen et al., 2024). Due to the above challenges, existing distill-based data synthesis approaches to training policy models, e.g., scaling up GPT4-distilled CoT data (Tang et al., 2024; Huang et al., 2024), have shown diminishing returns and cannot exceed the capability of their teacher model; meanwhile, as of today, training reliable PRMs for math reasoning remains an open question. + +In this work, we introduce rStar-Math, a self-evolvable System 2-style reasoning approach that achieves the state-of-the-art math reasoning, rivaling and sometimes even surpassing OpenAI o1 on challenging math competition benchmarks with a model size as small as 7 billion. Unlike solutions relying on superior LLMs for data synthesis, rStar-Math leverages smaller language models (SLMs) with Monte Carlo Tree Search (MCTS) to establish a self-evolutionary process, iteratively generating higher-quality training data. To achieve self-evolution, rStar-Math introduces three key innovations. + +First, a novel code-augmented CoT data synthesis method, which performs extensive MCTS rollouts to generate step-by-step verified reasoning trajectories with self-annotated MCTS $Q$ -values. Specifically, math problem-solving is decomposed into multi-step generation within MCTS. 
At each step, the SLM serving as the policy model samples candidate nodes, each generating a one-step CoT and the corresponding Python code. To verify the generation quality, only nodes with successful Python code execution are retained, thus mitigating errors in intermediate steps. Moreover, extensive MCTS rollouts automatically assign a Q-value to each intermediate step based on its contribution: steps contributing to more trajectories that lead to the correct answer are given higher Q-values and considered higher quality. This ensures that the reasoning trajectories generated by SLMs consist of correct, high-quality intermediate steps. + +Second, a novel method that trains an SLM acting as a process preference model, i.e., a PPM to implement the desired PRM, that reliably predicts a reward label for each math reasoning step. The PPM leverages the fact that, although Q-values are still not precise enough to score each reasoning step despite using extensive MCTS rollouts, the Q-values can reliably distinguish positive (correct) steps from negative (irrelevant/incorrect) ones. Thus the training method constructs preference pairs for each step based on Q-values and uses a pairwise ranking loss (Ouyang et al., 2022) to optimize PPM's score prediction for each reasoning step, achieving reliable labeling. This approach avoids conventional methods that directly use Q-values as reward labels (Luo et al., 2024; Chen et al., 2024), which are inherently noisy and imprecise in stepwise reward assignment. + +Finally, a four-round self-evolution recipe that progressively builds both a frontier policy model and PPM from scratch. We begin by curating a dataset of 747k math word problems from publicly available sources. In each round, we use the latest policy model and PPM to perform MCTS, generating increasingly high-quality training data using the above two methods to train a stronger policy model and PPM for next round. Each round achieves progressive refinement: (1) a stronger policy SLM, (2) a more reliable PPM, (3) generating better reasoning trajectories via PPM-augmented MCTS, and (4) improving training data coverage to tackle more challenging competition-level math problems. + +Extensive experiments across four SLMs (1.5B-7B) and seven math reasoning tasks demonstrate the effectiveness of rStar-Math. Remarkably, rStar-Math improves all four SLMs, matching or even surpassing OpenAI o1 on challenging math benchmarks. On MATH benchmark, with 8 search trajectories, rStar-Math boosts Qwen2.5-Math7B from $58.8\%$ to $89.4\%$ and Qwen2.5-Math-1.5B from $51.2\%$ to $87.8\%$ . With 64 trajectories, the scores rise to $90\%$ and $88.4\%$ , outperforming o1-preview by $4.5\%$ and $2.6\%$ and matching o1-mini's $90\%$ . On the Olympiad-level AIME 2024, rStar-Math solves on average $53.3\%$ (8/15) of the problems, exceeding o1-preview by $8.7\%$ and all other open-sourced LLMs. We further conduct comprehensive experiments to verify the superiority of step-by-step verified reasoning trajectories over state-of-the-art data synthesis baselines, as well as the PPM's effectiveness compared to outcome reward models and Q value-based PRMs. + +# 2 Related Works + +Math Data Synthesis. Advancements in LLM math reasoning have largely relied on curating high-quality CoT data, with most leading approaches being GPT-distilled, using frontier models like GPT-4 for synthesis (Wang et al., 2024b; Gou et al., 2023; Luo et al., 2023). Notable works include NuminaMath (Jia LI & Polu, 2024a) and MetaMath (Yu et al., 2023b). 
While effective, this limits reasoning to the capabilities of the teacher LLM. Hard problems that the teacher LLM cannot solve are excluded from the training set. Even solvable problems may contain error-prone intermediate steps, which are hard to detect. Although rejection sampling methods (Yuan et al., 2023; Brown et al., 2024) can improve data quality, they do not guarantee correct intermediate steps. As a result, naively scaling up CoT data generation using frontier LLMs yields diminishing returns, with performance gains approaching saturation.

Scaling Test-time Compute has introduced new scaling laws, allowing LLMs to improve performance by generating multiple samples and using reward models for best-solution selection (Snell et al., 2024; Wu et al., 2024; Brown et al., 2024). Various search methods have been proposed (Kang et al., 2024; Wang et al., 2024a), including random sampling (Wang et al., 2023) and tree-search methods (Yao et al., 2024; Zhang et al., 2024a; Qi et al., 2024) like MCTS. However, open-source methods for scaling test-time compute have shown limited gains in math reasoning, often due to policy LLM or reward model limitations.

![](images/6e59b216179a83be8467e733e57061379ed9e6fe55912bfd8d9012b0c5a1147a.jpg)
(a) step-by-step verified reasoning trajectory

![](images/4184f8d8262f4a612ddc5a669bbf76a6ea9716b8778169b01072306ed3ab9f69.jpg)
(c) 4 rounds of self-evolution
Figure 1. The overview of rStar-Math.

Reward Models are crucial for effective System 2 reasoning but are challenging to obtain. The major works include Outcome Reward Model (Yang et al., 2024; Yu et al., 2023a) and Process Reward Model (PRM) (Lightman et al., 2024). While PRMs offer promising dense, step-level reward signals for complex reasoning (Luo et al., 2024), collecting step-level annotations remains an obstacle. While (Kang et al., 2024; Wang et al., 2024a) rely on costly human-annotated datasets like PRM800k (Lightman et al., 2024), recent approaches (Wang et al., 2024c; Luo et al., 2024) explore automated annotation via Monte Carlo Sampling or MCTS. However, they struggle to generate precise reward scores, limiting performance gains. rStar-Math introduces a novel process preference model (PPM) that eliminates the need for accurate step-level reward score annotation.

# 3 Methodology

In our work, we explore using two 7B SLMs to generate higher-quality training data. However, self-generating data presents significant challenges for SLMs. SLMs frequently fail to generate correct solutions, and even when the final answer is correct, the intermediate steps are often flawed or of poor quality. Moreover, SLMs solve fewer challenging problems compared to advanced models like GPT-4.

This section introduces our methodology, as illustrated in Fig. 1. To mitigate errors and low-quality intermediate steps, we introduce a code-augmented CoT synthesis method, which performs extensive MCTS rollouts to generate step-by-step verified reasoning trajectories, annotated with Q-values. To further improve SLM performance on challenging problems, we introduce a four-round self-evolution recipe. In each round, both the policy SLM and the reward model are updated to stronger versions, progressively tackling more difficult problems and generating higher-quality training data. Finally, we present a novel process reward model training approach that eliminates the need for precise per-step reward annotations, yielding the more effective process preference model (PPM).
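As a toy illustration of the execution-based filtering behind the code-augmented CoT synthesis described next (a minimal sketch with fabricated candidate steps and an assumed `step_executes` helper, not the authors' released implementation), a candidate step is kept only if its Python code, concatenated with the code of all previous steps, runs without error:

```python
# Minimal sketch of execution-based filtering for code-augmented CoT steps.
# The prefix and candidates below are fabricated; in rStar-Math they come from
# the policy SLM, with the NL CoT embedded as Python comments.

def step_executes(prefix_code: str, candidate_code: str) -> bool:
    """Return True if prefix + candidate runs without raising an exception."""
    program = prefix_code + "\n" + candidate_code
    try:
        exec(program, {})  # isolated namespace; a real system would sandbox this
        return True
    except Exception:
        return False

prefix = "# Step 1: total distance walked south\ntotal_south = 1/2 + 1/2"
candidates = [
    "# Step 2: total distance walked east\ntotal_east = 3/4",   # valid
    "# Step 2: broken candidate\ntotal_east = 3/4 +",            # SyntaxError
    "# Step 2: undefined name\ntotal_east = easting",            # NameError
]

valid = [c for c in candidates if step_executes(prefix, c)]
print(f"{len(valid)} of {len(candidates)} candidate steps retained")
```

In the actual pipeline, the retained candidates are additionally scored by the PPM and selected via the UCT rule in Eq. 1.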
# 3.1 Step-by-Step Verified Reasoning Trajectory

Given a problem $x$ and a policy model $M$ , we use standard MCTS to incrementally construct a search tree for step-by-step solution exploration. As shown in Fig. 1(a), the root node represents question $x$ , and child nodes correspond to intermediate steps $s$ generated by $M$ . A root-to-leaf path ending at terminal node $s_d$ forms a trajectory $\mathbf{t} = x \oplus s_1 \oplus s_2 \oplus \ldots \oplus s_d$ , with each step $s_i$ assigned a Q-value $Q(s_i)$ . From the search tree $\mathcal{T}$ , we extract solution trajectories $\mathbb{T} = \{\mathbf{t}^1, \mathbf{t}^2, \dots, \mathbf{t}^n\} (n \geq 1)$ . Our goal is to select high-quality trajectories from $\mathbb{T}$ to construct the training set. To achieve this, we introduce a code-augmented CoT synthesis method to filter out low-quality generations and perform extensive rollouts to improve the reliability of the Q-value annotations.

Code-augmented CoT Generation. Prior MCTS approaches primarily generate natural language (NL) CoTs (Qi et al., 2024; Zhang et al., 2024b). However, LLMs often suffer from hallucination, producing incorrect or irrelevant steps yet still arriving at the correct answer by chance (Lanham et al., 2023). These flawed steps are challenging to detect and eliminate. To address this, we propose a novel code execution augmented CoT. As shown in Fig. 2, the policy model generates a one-step NL CoT alongside its corresponding Python code, where the NL CoT is embedded as a Python comment. Only generations with successfully executed Python code are retained as valid candidates.

Question: Bill walks $\frac{1}{2}$ mile south, then $\frac{3}{4}$ mile east, and finally $\frac{1}{2}$ mile south. How many miles is he, in a direct line, from his starting point? Express your answer as a decimal to the nearest hundredth.

```python
# Step 1: Calculate the total distance walked south  (NL CoT as Python comment)
total_south = 1/2 + 1/2
# Step 2: Calculate the total distance walked east
total_east = 3/4
# Step 3: Use the Pythagorean theorem to find the direct distance from the starting point
import math
direct_distance = math.sqrt(total_south**2 + total_east**2)
# Step 4: Round the direct distance to the nearest hundredth
direct_distance_rounded = round(direct_distance, 2)
```

From the result, we can see that the direct distance from the starting point is \boxed{1.25} miles.

Python code execution for step 1:

```python
# Step 1: Calculate the total distance walked south
total_south = 1/2 + 1/2
```

Python code execution for step 2:

```python
# Step 1: Calculate the total distance walked south
total_south = 1/2 + 1/2
# Step 2: Calculate the total distance walked east
total_east = 3/4
```

Figure 2. An example of Code-augmented CoT. To avoid syntax errors, the code at step $i$ is executed by concatenating it with code from all previous steps. For example, step 2 includes code from step 1 to ensure correct execution.

Specifically, starting from the root node $x$ , we perform multiple MCTS iterations through selection, expansion, simulation, and back-propagation. At step $i$ , we collect the latest reasoning trajectory $x \oplus s_1 \oplus s_2 \oplus \ldots \oplus s_{i-1}$ as the current state. Based on this state, we prompt (see Appendix A.4) the policy model to generate $n$ candidates $s_{i,0}, \ldots, s_{i,n-1}$ . Python code execution is then employed to filter valid nodes. As shown in Fig. 2, each generation
$s_{i,j}$ is concatenated with the code from all previous steps, forming $s_1 \oplus s_2 \oplus \ldots \oplus s_{i-1} \oplus s_{i,j}$ . Candidates that execute successfully are retained as valid nodes and scored by the PPM, which assigns a Q-value $q(s_i)$ . Then, we use the well-known Upper Confidence bounds for Trees (UCT) (Kocsis & Szepesvári, 2006) to select the best node among the $n$ candidates. This selection is mathematically represented as:

$$
\mathrm{UCT}(s) = Q(s) + c \sqrt{\frac{\ln N_{\text{parent}}(s)}{N(s)}}; \quad Q(s) = \frac{q(s)}{N(s)} \tag{1}
$$

where $N(s)$ denotes the number of visits to node $s$ , and $N_{\text{parent}}(s)$ is the visit count of $s$ 's parent node. The predicted reward $q(s)$ is provided by the PPM and will be updated through back-propagation. $c$ is a constant that balances exploitation and exploration.

We repeat the process until reaching a terminal node, either by arriving at a final answer or hitting the maximum tree depth. This is referred to as a rollout. Based on whether the rollout reaches the correct answer, we perform back-propagation to update the trajectory Q-value scores. We introduce the score annotation methods in the next sections.

Extensive Rollouts for Q-value Annotation. Accurate Q-value $Q(s)$ annotation in Eq. 1 is crucial for guiding MCTS towards correct problem-solving paths and identifying high-quality steps within trajectories. Following AlphaGo (Silver et al., 2017) and rStar (Qi et al., 2024), we perform MCTS rollouts to assign a Q-value to each step. However, insufficient rollouts can lead to spurious Q-value assignments, such as overestimating suboptimal steps. To mitigate this issue, we draw inspiration from Go players, who refine their evaluations for each move through repeated gameplay. Analogously, within each rollout, we update the Q-value of each step based on its contribution to achieving the correct final answer. Through extensive MCTS rollouts, steps consistently leading to correct answers achieve higher Q-values, occasional successes yield moderate Q-values, and consistently incorrect steps receive low Q-values. Specifically, we introduce two self-annotation methods to obtain these step-level Q-values. Fig. 1(c) shows the detailed setting in the four rounds of self-evolution.

Terminal-guided annotation. During the first two rounds, when the PPM is unavailable or insufficiently accurate, we use terminal-guided annotation. Formally, let $q(s_i)^{k}$ denote the q value for step $s_i$ after back-propagation in the $k^{th}$ rollout; we then score the per-step Q-value as follows:

$$
q(s_i)^{k} = q(s_i)^{k-1} + q(s_d)^{k}; \tag{2}
$$

where the initial q value $q(s_i)^{0} = 0$ in the first rollout. If this step frequently leads to a correct answer, its q value will increase; otherwise, it decreases. Terminal nodes (denoted as $s_d$ in Eq. 2) are scored as $q(s_d) = 1$ for correct answers and $q(s_d) = -1$ otherwise, as shown in Fig. 1.

PPM-augmented annotation. Starting from the third round, we use PPM to score each step for more effective generation. Compared to terminal-guided annotation, which requires multiple rollouts for a meaningful $q$ value, PPM directly predicts a non-zero initial $q$ value. PPM also helps the policy model to generate higher-quality steps, guiding solutions towards correct paths. Formally, for step $s_i$ , PPM predicts an initial $q(s_i)^{0}$ value based on the partial trajectory:

$$
q(s_i)^{0} = \mathrm{PPM}(x \oplus s_1 \oplus s_2 \oplus \dots \oplus s_{i-1} \oplus s_i) \tag{3}
$$

This $q$ value will be updated based on the terminal node's $q(s_d)$ value through MCTS back-propagation in Eq. 2. For the terminal node $s_d$ , we do not use the PPM for scoring during training data generation. Instead, we assign a more accurate score based on ground truth labels, as in terminal-guided rewarding.

# 3.2 Process Preference Model

Process reward models, which provide granular step-level reward signals, are highly desirable for solving challenging math problems. However, obtaining high-quality step-level training data remains an open challenge. Existing methods rely on human annotations (Lightman et al., 2023) or MCTS-generated scores (Zhang et al., 2024b; Chen et al., 2024) to label a score for each step. These scores then serve as training targets, with methods such as MSE loss (Chen et al., 2024) or pointwise loss (Wang et al., 2024c; Luo et al., 2024; Zhang et al., 2024b) used to minimize the difference between predicted and labeled scores. As a result, the precision of these annotated step-level scores determines the effectiveness of the final process reward model.

Unfortunately, precise per-step scoring remains an unsolved challenge. Although our extensive MCTS rollouts improve the reliability of Q-values, precisely evaluating fine-grained step quality presents a major obstacle. For instance, among a set of correct steps, it is difficult to rank them as best, second-best, or average and then assign precise scores. Similarly, among incorrect steps, differentiating the worst from moderately poor steps poses analogous challenges. Even expert human annotation struggles with consistency, particularly at scale, leading to inherent noise in training labels.

We introduce a novel training method that trains a process preference model (PPM) by constructing step-level positive-negative preference pairs. As shown in Fig. 1(b), instead of using Q-values as direct reward labels, we use them to select steps from the MCTS tree for preference pair construction. For each step, we select two candidates with the highest Q-values as positive steps and two with the lowest as negative steps. Critically, the selected positive steps must lead to a correct final answer, while negative steps must lead to incorrect answers. For intermediate steps (except the final answer step), the positive and negative pairs share the same preceding steps. For the final answer step, where identical reasoning trajectories rarely yield different final answers, we relax this restriction. We select two correct trajectories with the highest average Q-values as positive examples and two incorrect trajectories with the lowest average Q-values as negative examples.

Following (Ouyang et al., 2022), we define our loss function using the standard Bradley-Terry model with a pairwise ranking loss (a small numerical sketch of this objective is given after Table 2):

$$
\mathcal{L}_{\mathrm{ppm}}(\theta) = -\frac{1}{4} \, \mathbb{E}_{(x,\, y_i^{pos},\, y_i^{neg}) \in \mathbb{D}} \left[ \log \left( \sigma \left( r_{\theta}(x, y_i^{pos}) - r_{\theta}(x, y_i^{neg}) \right) \right) \right] \tag{4}
$$

where, when $i$ is not the final answer step, $y_i^{pos} = s_1 \oplus \ldots \oplus s_{i-1} \oplus s_i^{pos}$ and $y_i^{neg} = s_1 \oplus \ldots \oplus s_{i-1} \oplus s_i^{neg}$ . Here, $r_{\theta}(x, y_i)$ denotes the PPM output, where $x$ is the problem and $y_i$ is the trajectory from the first step to the $i^{th}$ step, and $\sigma$ denotes the sigmoid function.

# 3.3 Self-Evolved Deep Thinking

Math Problems Collection. We collect a large dataset of 747k math word problems with ground-truth answers, primarily from NuminaMath (Jia LI & Polu, 2024a) and MetaMath (Yu et al., 2023b). Notably, only competition-level problems (e.g., Olympiads and AIME/AMC) from NuminaMath are included, as we observe that grade-school-level problems do not significantly improve LLM complex math reasoning. To augment the limited competition-level problems, we follow (Li et al., 2024) and use GPT-4 to synthesize new problems based on the seed problems in the 7.5k MATH train set and the 3.6k AMC-AIME training split. However, GPT-4 often generated unsolvable problems or incorrect solutions for challenging seed problems. To filter these, we prompt GPT-4 to generate 10 solutions per problem, retaining only those with at least 3 consistent solutions.

Table 2. Percentage of the 747k math problems correctly solved in each round. The first round uses DeepSeek-Coder-Instruct as the policy LLM, while later rounds use our fine-tuned 7B policy SLM.
|  | #models in MCTS | GSM-level | MATH-level | Olympiad-level | All |
| --- | --- | --- | --- | --- | --- |
| Round 1 | DeepSeek-Coder-V2-Instruct | 96.61% | 67.36% | 20.99% | 60.17% |
| Round 2 | policy SLM-r1 | 97.88% | 67.40% | 56.04% | 66.60% |
| Round 3 | policy SLM-r2, PPM-r2 | 98.15% | 88.69% | 62.16% | 77.86% |
| Round 4 | policy SLM-r3, PPM-r3 | 98.15% | 94.53% | 80.58% | 90.25% |
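Before describing trajectory collection, the pairwise objective in Eq. 4 can be illustrated with a short, self-contained sketch; the reward values below are fabricated stand-ins for the PPM outputs $r_{\theta}(x, y^{pos})$ and $r_{\theta}(x, y^{neg})$, so this shows only the Bradley-Terry form, not the actual training code:

```python
# Illustrative NumPy sketch of the PPM pairwise ranking loss in Eq. 4.
# r_pos / r_neg stand in for PPM scores of positive / negative partial
# trajectories; pairing the top-2 and bottom-2 candidates of a step gives
# four pairs, which is consistent with the 1/4 factor in Eq. 4.
import numpy as np

def ppm_pairwise_loss(r_pos: np.ndarray, r_neg: np.ndarray) -> float:
    """Average of -log(sigmoid(r_pos - r_neg)) over all positive/negative pairs."""
    diff = r_pos[:, None] - r_neg[None, :]  # all pairwise score differences
    return float(-np.mean(np.log(1.0 / (1.0 + np.exp(-diff)))))

# Fabricated scores for one step: two positive and two negative candidates.
r_pos = np.array([1.3, 0.9])
r_neg = np.array([-0.4, 0.2])
print(f"pairwise ranking loss: {ppm_pairwise_loss(r_pos, r_neg):.4f}")
```

Minimizing this loss pushes the scores of steps that lead to correct answers above those of steps that lead to incorrect ones, without requiring precise per-step reward labels.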
+ +Reasoning Trajectories Collection. Instead of using the original solutions in the 747k math dataset, we conduct extensive MCTS rollouts (Sec. 3.1) to generate higher-quality step-by-step verified reasoning trajectories. In each self-evolution round, we perform 16 rollouts per math problem, which leads to 16 reasoning trajectories. Problems are then categorized by difficulty based on the correct ratio of the generated trajectories: easy (all solutions are correct), medium (a mix of correct and incorrect solutions) and hard (all solutions are incorrect). For hard problems with no correct trajectories, an additional MCTS with 16 rollouts is performed. After that, all step-by-step trajectories and their annotated Q-values are collected and filtered to train the policy SLM and process preference model. + +Self-evolution Recipe. Due to the weaker capabilities of SLMs, we perform four rounds of self-evolution to progressively generate higher-quality data. Each round uses MCTS to generate step-by-step verified reasoning trajectories, which are then used to train the new policy SLM and PPM. The new models are then used in next round. Fig. 1(c) and Table 2 detail the models used for data generation in each round, along with the new trained models. Next, we outline the specific improvements targeted in each round. + +$\diamond$ Round 1: Bootstrapping an initial strong policy SLM-r1. To enable SLMs to self-generate reasonably good training data, we perform a bootstrap round to fine-tune an initial strong policy model. As shown in Table 2, we run MCTS with DeepSeek-Coder-V2-Instruct (236B) to collect the SFT data. With no available reward model in this round, we use terminal-guided annotation for Q-values and limit MCTS to 8 rollouts for efficiency. For correct solutions, the top-2 trajectories with the highest average Q-values are selected as SFT data, which are then used to SFT the first policy SLM, denoted as SLM-r1. + +$\diamond$ Round 2: Training a reliable PPM-r2. In this round, with the policy model updated to the 7B SLM-r1, we conduct extensive MCTS rollouts for more reliable Q-value annotation and train the first reliable reward model, PPM-r2. Specifically, we perform 16 MCTS rollouts per problem. The resulting step-by-step verified reasoning trajectories show significant improvements in both quality and Q-value precision. As shown in Appendix Table 10, PPM-r2 is notably more effective than in the bootstrap round. + +Round 3: PPM-augmented MCTS to significantly improve data quality. With the reliable PPM-r2, we perform + +PPM-augmented MCTS in this round to generate data, leading to significantly higher-quality trajectories. The generated reasoning trajectories and annotated Q-values are then used to train the new policy SLM-r3 and PPM-r3, both of which show significant improvements. + +Round 4: Solving challenging problems. After the third round, while grade school and MATH problems achieve high success rates, only $62.16\%$ of Olympiad-level problems are included in the training set. This is NOT solely due to weak capabilities in our SLMs, as many Olympiad problems remain unsolved by GPT-4 or o1. To improve coverage, we adopt a straightforward strategy. For unsolved problems after 16 rollouts, we perform an additional 64 rollouts, and if needed, increase to 128. We also conduct multiple MCTS tree expansions with different random seeds. This boosts the success rate of Olympiad-level problems to $80.58\%$ . 
+ +After four rounds of self-evolution, $90.25\%$ of the 747k math problems are covered in the training set, as shown in Table 2. Among the remaining unsolved problems, most are synthetic. A manual review of a random sample of 20 problems reveals that 19 are incorrectly labeled. Thus, we conclude that the unsolved problems are of low quality and stop the self-evolution at round 4. + +# 4 Evaluation + +# 4.1 Setup + +Evaluation Datasets. We evaluate rStar-Math on diverse mathematical benchmarks. In addition to the widely-used GSM8K (Cobbe et al., 2021), we include challenging benchmarks from multiple domains: (i) competition and Olympiad-level benchmarks, such as MATH-500 (Lightman et al., 2023), AIME 2024 (AI-MO, 2024a), AMC 2023 (AI-MO, 2024b) and Olympiad Bench (He et al., 2024). Specifically, AIME is the exams designed to challenge the brightest high school math students in America; (ii) college-level math problems from College Math (Tang et al., 2024) and (iii) out-of-domain math benchmark: GaoKao (Chinese College Entrance Exam) En 2023 (Liao et al., 2024). + +Base Models and Setup. We use SLMs of different sizes as the base policy models: Qwen2.5-Math1.5B (Qwen, 2024b), Phi3-mini-Instruct (3B) (Abdin et al., 2024), Qwen2-Math-7B (Qwen, 2024a) and Qwen2.5-Math7B (Qwen, 2024c). Due to limited GPU resources, we performed 4 rounds of self-evolution exclusively on Qwen2.5-Math-7B, yielding 4 evolved policy SLMs (Table 9) and 4 PPMs (Table 10). For the other 3 policy models, we fine-tune them using step-by-step verified trajectories generated from Qwen2.5-Math-7B's 4th round. PPM from this round is then used as the reward model for the 3 policy SLMs. + +Baselines. We compare against three strong baselines representing both System 1 and System 2 approaches: (i) Fron + +tier LLMs, including GPT-4o, the latest Claude, OpenAI o1-preview and o1-mini. We measure their accuracy on AMC 2023, Olympiad Bench, College Math, Gaokao and GSM8K, with accuracy numbers for other benchmarks are taken from public technical reports (Team, 2024). (ii) Open-sourced superior reasoning models; (iii) Both System 1 and System 2 performance of the base models trained from the original models teams, including Instruct versions (e.g., Qwen2.5-Math-7B-Instruct) and Best-of-N (e.g., Qwen2.5-Math-72B-Instruct+Qwen2.5-Math-RM-72B). Notably, the reward model used for the three Qwen base models is a 72B ORM, significantly larger than our 7B PPM. + +Evaluation Metric. We report Pass@1 accuracy for all baselines. For System 2 baselines, we use default evaluation settings, such as default thinking time for o1-mini and o1-preview. For Qwen models with Best-of-N, we re-evaluate MATH-500 and AIME/AMC accuracy; other benchmarks results are from their technical reports. For a fair comparison, rStar-Math run MCTS to generate the same number of solutions as Qwen. Specifically, we generate 16 trajectories for AIME/AMC and 8 for other benchmarks, using PPM to select the best solution. We also report performance with increased test-time computation using 64 trajectories, denoted as rStar-Math $^{64}$ . + +# 4.2 Main Results + +Results on diverse challenging math benchmarks. Table 3 shows the results of rStar-Math with comparing to state-of-the-art reasoning models. We highlight two key observations: (1) rStar-Math significantly improves SLMs math reasoning capabilities, achieving performance comparable to or surpassing OpenAI o1 with substantially smaller model size (1.5B-7B). 
For example, Qwen2.5-Math-7B, originally at $58.8\%$ accuracy on MATH, improved dramatically to $90.0\%$ with rStar-Math, outperforming o1-preview and Claude 3.5 Sonnet while matching o1-mini. On the College Math benchmark, rStar-Math exceeds o1-mini by $2.7\%$ . On AIME 2024, rStar-Math scored $53.3\%$ , ranking just below o1-mini, with the 7B model solving 8/15 problems in both AIME I and II, placing in the top $20\%$ of the brightest high school math students. Notably, 8 of the unsolved problems were geometry-based, requiring visual understanding, a capability rStar-Math currently does not support. (2) Despite using smaller policy models (1.5B-7B) and reward models (7B), rStar-Math significantly outperforms state-of-the-art System 2 baselines. Compared to Qwen Best-of-N baselines, which use the same base models (Qwen2-Math-7B, Qwen2.5-Math-1.5B/7B) but a $10\times$ larger reward model (Qwen2.5-Math-RM-72B), rStarMath consistently improves the reasoning accuracy of all base models to state-of-the-art levels. Even against Best-of-N with a $10\times$ larger Qwen2.5-Math-72B-Instruct policy model, rStar-Math surpasses it on all benchmarks except + +Table 3. The results of rStar-Math and other frontier LLMs on the most challenging math benchmarks. rStar-Math reports Pass@1 accuracy when searching 16 trajectories for AIME/AMC and 8 for others. rStar-Math64 shows results when sampling 64 trajectories. + +
| Model | Method | MATH | AIME 2024 | AMC 2023 | Olympiad Bench | College Math | GSM8K | Gaokao En 2023 (OOD) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| *Frontier LLMs* |  |  |  |  |  |  |  |  |
| GPT-4o | System 1 | 76.6 | 9.3 | 47.5 | 43.3 | 48.5 | 92.9 | 67.5 |
| Claude3.5-Sonnet | System 1 | 78.3 | 16.0 | - | - | - | 96.4 | - |
| GPT-o1-preview | - | 85.5 | 44.6 | 90.0 | - | - | - | - |
| GPT-o1-mini | - | 90.0 | 56.7 | 95.0 | 65.3 | 57.8 | 94.8 | 78.4 |
| *Open-Sourced Reasoning LLMs* |  |  |  |  |  |  |  |  |
| DeepSeek-Coder-V2-Instruct | System 1 | 75.3 | 13.3 | 57.5 | 37.6 | 46.2 | 94.9 | 64.7 |
| NuminaMath-72B-CoT | System 1 | 64.0 | 3.3 | 70.0 | 32.6 | 39.7 | 90.8 | 58.4 |
| LLaMA3.1-70B-Instruct | System 1 | 65.4 | 23.3 | 50.0 | 27.7 | 42.5 | 94.1 | 54.0 |
| Qwen2.5-Math-72B-Instruct | System 1 | 85.6 | 30.0 | 70.0 | 49.0 | 49.5 | 95.9 | 71.9 |
| Qwen2.5-Math-72B-Instruct+72B ORM | System 2 | 85.8 | 36.7 | 72.5 | 54.5 | 50.6 | 96.4 | 76.9 |
| *General Base Model: Phi3-mini-Instruct (3.8B)* |  |  |  |  |  |  |  |  |
| Phi3-mini-Instruct (base model) | System 1 | 41.4 | 3.3 | 37.5 | 12.3 | 33.1 | 85.7 | 37.1 |
| rStar-Math (3.8B SLM+7B PPM) | System 2 | 85.4 | 40.0 | 77.5 | 59.3 | 58.0 | 94.5 | 77.1 |
| rStar-Math$^{64}$ (3.8B SLM+7B PPM) | System 2 | 86.4 | 43.3 | 80.0 | 60.3 | 59.1 | 94.7 | 77.7 |
| *Math-Specialized Base Model: Qwen2.5-Math-1.5B* |  |  |  |  |  |  |  |  |
| Qwen2.5-Math-1.5B (base model) | System 1 | 51.2 | 0.0 | 22.5 | 16.7 | 38.4 | 74.6 | 46.5 |
| Qwen2.5-Math-1.5B-Instruct | System 1 | 60.0 | 10.0 | 60.0 | 38.1 | 47.7 | 84.8 | 65.5 |
| Qwen2.5-Math-1.5B-Instruct+72B ORM | System 2 | 83.4 | 20.0 | 72.5 | 47.3 | 50.2 | 94.1 | 73.0 |
| rStar-Math (1.5B SLM+7B PPM) | System 2 | 87.8 | 46.7 | 80.0 | 63.5 | 59.0 | 94.3 | 77.7 |
| rStar-Math$^{64}$ (1.5B SLM+7B PPM) | System 2 | 88.6 | 46.7 | 85.0 | 64.6 | 59.3 | 94.8 | 79.5 |
| *Math-Specialized Base Model: Qwen2-Math-7B* |  |  |  |  |  |  |  |  |
| Qwen2-Math-7B (base model) | System 1 | 53.4 | 3.3 | 25.0 | 17.3 | 39.4 | 80.4 | 47.3 |
| Qwen2-Math-7B-Instruct | System 1 | 73.2 | 13.3 | 62.5 | 38.2 | 45.9 | 89.9 | 62.1 |
| Qwen2-Math-7B-Instruct+72B ORM | System 2 | 83.4 | 23.3 | 62.5 | 47.6 | 47.9 | 95.1 | 71.9 |
| rStar-Math (7B SLM+7B PPM) | System 2 | 88.2 | 43.3 | 80.0 | 63.1 | 58.4 | 94.6 | 78.2 |
| rStar-Math$^{64}$ (7B SLM+7B PPM) | System 2 | 88.6 | 46.7 | 85.0 | 63.4 | 59.3 | 94.8 | 79.2 |
| *Math-Specialized Base Model: Qwen2.5-Math-7B* |  |  |  |  |  |  |  |  |
| Qwen2.5-Math-7B (base model) | System 1 | 58.8 | 0.0 | 22.5 | 21.8 | 41.6 | 91.6 | 51.7 |
| Qwen2.5-Math-7B-Instruct | System 1 | 82.6 | 6.0 | 62.5 | 41.6 | 46.8 | 95.2 | 66.8 |
| Qwen2.5-Math-7B-Instruct+72B ORM | System 2 | 88.4 | 26.7 | 75.0 | 49.9 | 49.6 | 97.9 | 75.1 |
| rStar-Math (7B SLM+7B PPM) | System 2 | 89.4 | 50.0 | 87.5 | 65.3 | 59.0 | 95.0 | 80.5 |
| rStar-Math$^{64}$ (7B SLM+7B PPM) | System 2 | 90.0 | 53.3 | 87.5 | 65.6 | 60.5 | 95.2 | 81.3 |
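Table 3 reports accuracy when multiple trajectories are searched and the PPM is used to pick the final solution. The snippet below is a deliberately simple, assumed illustration of that selection step over already-scored candidate solutions (fabricated answers and scores), not the actual evaluation harness:

```python
# Minimal sketch of reward-guided answer selection over sampled trajectories.
# Each tuple is (final_answer, reward_score); the values are fabricated.
trajectories = [
    ("1.25", 0.92),
    ("1.25", 0.88),
    ("1.75", 0.31),
    ("1.25", 0.79),
]

best_answer, best_score = max(trajectories, key=lambda t: t[1])
print(f"selected answer: {best_answer} (score {best_score:.2f})")
```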
+ +Figure 3. Reasoning performance under scaling up the test-time compute. +![](images/13e5b736600808c51302a8f045a014cccca9beb9ba79084a1dc7e11f686705a6.jpg) +o1-preview rStar-Math (7B SLM+7B PPM) Qwen2.5 Best-of-N (72B LLM+72B ORM) o1-mini Qwen2.5 Best-of-N (7B SLM+72B ORM) + +![](images/5118d7f94102e69cdf1c1190d21cff9f2579280b27b7d0ddd4bbd960a1ae56e9.jpg) + +![](images/70852d2347bcfe3f086e3bb378d7c3dcd1f71cfb756049c5bd893cb592f8fba6.jpg) + +![](images/98461371019cf064dc0a0c3efe3701c9ca4ac8fdfb767ba9c91d916cf45db931.jpg) + +GSM8K, using the same number of sampled solutions. + +Scaling up test-time computation. By increasing test-time computation, rStar-Math can explore more trajectories, potentially improving performance. In Fig. 3, we show the impact of test-time compute scaling by comparing the accuracy of the official Qwen Best-of-N across different numbers of sampled trajectories on four challenging math benchmarks. Sampling only one trajectory corresponds to the policy LLM's Pass@1 accuracy, indicating a fallback to System 1 reasoning. We highlight two key observations: (1) With only 4 trajectories, rStar-Math significantly outperforms Best-of-N baselines, exceeding o1-preview and approaching o1-mini, demonstrating its effectiveness. (2) Scaling test-time compute improves reasoning accuracy across all benchmarks, though with varying trends. On Math, AIME, and Olympiad Bench, rStar-Math shows saturation or slow improvement at 64 trajectories, while on College Math, performance continues to improve steadily. + +# 4.3 Ablation Study and Analysis + +Table 4. The continuously improved math reasoning capabilities via rStar-Math self-evolved deep thinking. Starting from round 2, the 7B base model powered by rStar-Math surpasses GPT-4o. + +
| Round # | MATH | AIME | AMC | Olympiad Bench | College Math | GSM8K | Gaokao |
| --- | --- | --- | --- | --- | --- | --- | --- |
| GPT-4o | 76.6 | 9.3 | 47.5 | 43.3 | 48.5 | 92.9 | 67.5 |
| Base 7B model | 58.8 | 0.0 | 22.5 | 21.8 | 41.6 | 91.6 | 51.7 |
| rStar-Math Round 1 | 75.2 | 10.0 | 57.5 | 35.7 | 45.4 | 90.9 | 60.3 |
| rStar-Math Round 2 | 86.6 | 43.3 | 75.0 | 59.4 | 55.6 | 94.0 | 76.4 |
| rStar-Math Round 3 | 87.0 | 46.7 | 80.0 | 61.6 | 56.5 | 94.2 | 77.1 |
| rStar-Math Round 4 | 89.4 | 50.0 | 87.5 | 65.3 | 59.0 | 95.0 | 80.5 |
The effectiveness of self-evolution. The impressive results in Table 3 are achieved after 4 rounds of rStar-Math self-evolved deep thinking. Table 4 details the performance across rounds, showing continuous accuracy improvements. In round 1, the main improvement comes from applying SFT to the base model. Round 2 brings a significant boost with the application of a stronger PPM in MCTS, which unlocks the full potential of System 2 deep reasoning. Notably, starting from round 2, rStar-Math outperforms GPT-4o. Rounds 3 and 4 show further improvements, driven by stronger System 2 reasoning through better policy SLMs and PPMs.

The effectiveness of step-by-step verified reasoning trajectory. rStar-Math generates step-by-step verified reasoning trajectories, which eliminate erroneous intermediate steps and further expand the training set with more challenging problems. To evaluate its effectiveness, we use the data generated from round 4 as SFT training data and compare it against three strong baselines: (i) GPT-distillation, which includes open-sourced CoT solutions synthesized using GPT-4, such as MetaMath (Yu et al., 2023b) and NuminaMath-CoT (Jia LI & Polu, 2024b); (ii) Random sampling from self-generation, which uses the same policy model (i.e., policy SLM-r3) to randomly generate trajectories; (iii) Rejection sampling, where 32 trajectories are randomly sampled from the policy model, with high-quality solutions ranked by our trained ORM (Appendix A.2). For fairness, we select two correct trajectories for each math problem in baselines (ii) and (iii). All SFT experiments use the same training recipe.

Table 5. Ablation study on the effectiveness of our step-by-step verified reasoning trajectories as the SFT dataset. We report the SFT accuracy of Qwen2.5-Math-7B fine-tuned with different datasets.
|  | Dataset | MATH | AIME | Olympiad Bench | College Math |
| --- | --- | --- | --- | --- | --- |
| GPT-4o | - | 76.6 | 9.3 | 43.3 | 48.5 |
| GPT4-distillation (Open-sourced) | MetaMath | 55.2 | 3.33 | 19.1 | 39.2 |
|  | NuminaMath-CoT | 69.6 | 10.0 | 37.2 | 43.4 |
| Self-generation by policy SLM-r3 | Random sample | 72.4 | 10.0 | 41.0 | 48.0 |
|  | Rejection sampling | 73.4 | 13.3 | 44.7 | 50.8 |
|  | Step-by-step verified (ours) | 78.4 | 26.7 | 47.1 | 52.5 |
Table 5 shows the math reasoning accuracy of Qwen2.5-Math-7B fine-tuned on different datasets. We highlight two observations: (i) Fine-tuning with our step-by-step verified trajectories significantly outperforms all other baselines. This is primarily due to our PPM-augmented MCTS for code-augmented CoT synthesis, which provides denser verification during math solution generation. It proves more effective than both random sampling, which lacks verification, and rejection sampling, where the ORM provides only sparse verification. (ii) Even randomly sampled code-augmented CoT solutions from our SLM yield comparable or better performance than the GPT-4-synthesized NuminaMath and MetaMath datasets. This indicates that our policy SLMs, after rounds of self-evolution, can generate high-quality math solutions. These results demonstrate the huge potential of our method to self-generate higher-quality reasoning data without relying on advanced LLM distillation.

The effectiveness of PPM. We train both a strong ORM and a Q-value score-based PRM (PQM) for comparison. To ensure a fair evaluation, we use the highest-quality training data: the step-by-step verified trajectories generated in round 4, with selected math problems matching those used for PPM training. Similar to PPM, we use step-level Q-values to select positive and negative trajectories for each math problem. The ORM is trained using a pairwise ranking loss (Ouyang et al., 2022), while the PQM follows (Chen et al., 2024; Zhang et al., 2024b) to use Q-values as reward labels and optimize with MSE loss. Detailed training settings are provided in Appendix A.2.

Table 6. Ablation study on the reward model. Process reward models (PQM and PPM) outperform ORM, with PPM pushing the frontier of math reasoning capabilities.
| RM | Inference | MATH | AIME | Olympiad Bench | College Math |
| --- | --- | --- | --- | --- | --- |
| o1-mini | - | 90.0 | 56.7 | 65.3 | 55.6 |
| ORM | Best-of-N | 82.6 | 26.7 | 55.1 | 55.5 |
| PQM | MCTS | 88.2 | 46.7 | 62.9 | 57.6 |
| PPM | MCTS | 89.4 | 50.0 | 65.3 | 59.0 |
+ +Table 6 compares the performance of ORM, PQM, and PPM for System 2 reasoning using our final round policy model. ORM provides reward signals only at the end of problem solving, so we use the Best-of-N, while PQM and PPM leverage MCTS-driven search. As shown in Table 6, both PQM and PPM outperform ORM by providing denser step-level reward signals, leading to higher accuracy on complex math reasoning tasks. However, PQM struggles on more challenging benchmarks, such as MATH and Olympiad Bench, due to the inherent imprecision of Q-values. In contrast, PPM constructs step-level preference data for training, enabling our 7B policy model to achieve comparable or superior performance to o1-mini across all benchmarks. + +# 5 Conclusion + +In this work, we present rStar-Math, a self-evolved System 2 deep thinking approach that significantly boosts the math reasoning capabilities of small LLMs, achieving state-of-the-art OpenAI o1-level performance. Extensive experiments across four different-sized SLMs and challenging math benchmarks demonstrate the superiority of rStar-Math, with achieving leading results while outperforming existing math reasoning LLMs and Best-of-N baselines. We observe that there can be further improvements with more challenging math problems, which we leave as future work. + +# Impact Statement + +This paper presents work whose goal is to advance the field of deep learning. While our research focuses on enhancing the mathematical reasoning capabilities of large language models, we acknowledge potential societal implications such as applications in education and automated problem-solving. There are many broader consequences of our work, none of which we feel must be specifically highlighted here. + +# References + +Inequality of arithmetic and geometric means. URL https://artofproblemsolving.com/wiki/index.php/AM-GM_Inequality. +Pythagorean theorem. URL https://en.wikipedia.org/wiki/Pythagorean_theorem. +Shoelace theorem. URL https://artofproblemsolving.com/wiki/index.php/Shoelace_Theorem. +Abdin, M., Jacobs, S. A., Awan, A. A., Aneja, J., Awadallah, A., Awadalla, H., Bach, N., Bahree, A., Bakhtiari, A., Behl, H., et al. Phi-3 technical report: A highly capable language model locally on your phone. arXiv preprint arXiv:2404.14219, 2024. +AI-MO. Aime 2024, 2024a. URL https://huggingface.co/datasets/AI-MO/aimo-validation-aime. + +AI-MO. Amc 2023, 2024b. URL https://huggingface.co/datasets/AI-MO/aimo-validation-amc. +Brown, B., Juravsky, J., Ehrlich, R., Clark, R., Le, Q. V., Ré, C., and Mirhoseini, A. Large language monkeys: Scaling inference compute with repeated sampling. arXiv preprint arXiv:2407.21787, 2024. +Chen, G., Liao, M., Li, C., and Fan, K. Alphamath almost zero: process supervision without process, 2024. +Cobbe, K., Kosaraju, V., Bavarian, M., Chen, M., Jun, H., Kaiser, L., Plappert, M., Tworek, J., Hilton, J., Nakano, R., et al. Training verifiers to solve math word problems. arXiv preprint arXiv:2110.14168, 2021. +Daniel, K. Thinking, fast and slow. Macmillan, 2011. +Gou, Z., Shao, Z., Gong, Y., Yang, Y., Huang, M., Duan, N., Chen, W., et al. Tora: A tool-integrated reasoning agent for mathematical problem solving. arXiv preprint arXiv:2309.17452, 2023. +He, C., Luo, R., Bai, Y., Hu, S., Thai, Z. L., Shen, J., Hu, J., Han, X., Huang, Y., Zhang, Y., et al. Olympiadbench: A challenging benchmark for promoting agi with olympiad-level bilingual multimodal scientific problems. arXiv preprint arXiv:2402.14008, 2024. +Huang, J., Chen, X., Mishra, S., Zheng, H. 
S., Yu, A. W., Song, X., and Zhou, D. Large language models cannot self-correct reasoning yet. arXiv preprint arXiv:2310.01798, 2023. +Huang, Z., Zou, H., Li, X., Liu, Y., Zheng, Y., Chern, E., Xia, S., Qin, Y., Yuan, W., and Liu, P. O1 replication journey - part 2: Surpassing o1-preview through simple distillation big progress or bitter lesson? *Github*, 2024. URL https://github.com/GAIR-NLP/01-Journey. +Jia LI, Edward Beeching, L. T. B. L. R. S. S. C. H. K. R. L. Y. A. J. Z. S. Z. Q. B. D. L. Z. Y. F. G. L. and Polu, S. Numinamath. [https://github.com/project-numina/aimo-progress-prize](https://github.com/project-numina/aimo-progress-prize/blob/main/report/numina_dataset.pdf), 2024a. +Jia LI, Edward Beeching, L. T. B. L. R. S. S. C. H. K. R. L. Y. A. J. Z. S. Z. Q. B. D. L. Z. Y. F. G. L. and Polu, S. Numinamath cot, 2024b. URL https://huggingface.co/datasets/AI-MO/NuminaMath-CoT. +Kang, J., Li, X. Z., Chen, X., Kazemi, A., and Chen, B. Mindstar: Enhancing math reasoning in pre-trained llms at inference time. arXiv preprint arXiv:2405.16265, 2024. + +Kocsis, L. and Szepesvári, C. Bandit based monte-carlo planning. volume 2006, pp. 282-293, 09 2006. ISBN 978-3-540-45375-8. doi: 10.1007/11871842_29. +Kumar, A., Zhuang, V., Agarwal, R., Su, Y., Co-Reyes, J. D., Singh, A., Baumli, K., Iqbal, S., Bishop, C., Roelofs, R., et al. Training language models to self-correct via reinforcement learning. arXiv preprint arXiv:2409.12917, 2024. +Lanham, T., Chen, A., Radhakrishnan, A., Steiner, B., Denison, C., Hernandez, D., Li, D., Durmus, E., Hubinger, E., Kernion, J., et al. Measuring faithfulness in chain-of-thought reasoning. arXiv preprint arXiv:2307.13702, 2023. +Li, C., Wang, W., Hu, J., Wei, Y., Zheng, N., Hu, H., Zhang, Z., and Peng, H. Common 7b language models already possess strong math capabilities. arXiv preprint arXiv:2403.04706, 2024. +Liao, M., Luo, W., Li, C., Wu, J., and Fan, K. Mario: Math reasoning with code interpreter output-a reproducible pipeline. arXiv preprint arXiv:2401.08190, 2024. +Lightman, H., Kosaraju, V., Burda, Y., Edwards, H., Baker, B., Lee, T., Leike, J., Schulman, J., Sutskever, I., and Cobbe, K. Let's verify step by step. arXiv preprint arXiv:2305.20050, 2023. +Lightman, H., Kosaraju, V., Burda, Y., Edwards, H., Baker, B., Lee, T., Leike, J., Schulman, J., Sutskever, I., and Cobbe, K. Let's verify step by step. In The Twelfth International Conference on Learning Representations, 2024. URL https://openreview.net/forum?id=v8L0pN6E0i. +Liu, A., Feng, B., Xue, B., Wang, B., Wu, B., Lu, C., Zhao, C., Deng, C., Zhang, C., Ruan, C., et al. Deepseek-v3 technical report. arXiv preprint arXiv:2412.19437, 2024. +Luo, H., Sun, Q., Xu, C., Zhao, P., Lou, J., Tao, C., Geng, X., Lin, Q., Chen, S., and Zhang, D. Wizardmath: Empowering mathematical reasoning for large language models via reinforced evol-instruct. arXiv preprint arXiv:2308.09583, 2023. +Luo, L., Liu, Y., Liu, R., Phatale, S., Lara, H., Li, Y., Shu, L., Zhu, Y., Meng, L., Sun, J., et al. Improve mathematical reasoning in language models by automated process supervision. arXiv preprint arXiv:2406.06592, 2024. +Noam Brown, I. A. and Lightman, H. Openai's noam brown, ilge akkaya and hunter lightman on o1 and teaching llms to reason better, 2024. URL https://www.youtube.com/watch?v=jPluSXJpdrA. +OpenAI. Gpt-4 technical report. 2023. + +OpenAI. Openai o1 system card. preprint, 2024. +Ouyang, L., Wu, J., Jiang, X., Almeida, D., Wainwright, C., Mishkin, P., Zhang, C., Agarwal, S., Slama, K., Ray, A., et al. 
Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems, 35:27730-27744, 2022. +Qi, Z., Ma, M., Xu, J., Zhang, L. L., Yang, F., and Yang, M. Mutual reasoning makes smaller llms stronger problem-solvers. arXiv preprint arXiv:2408.06195, 2024. +Qwen. Qwen2-math-7b, 2024a. URL https://huggingface.co/Qwen/Qwen2-Math-7B. +Qwen. Qwen2.5-math-1.5b, 2024b. URL https://huggingface.co/Qwen/Qwen2.5-Math-1.5B. +Qwen. Qwen2.5-math-7b, 2024c. URL https://huggingface.co/Qwen/Qwen2.5-Math-7B. +Renze, M. and Guven, E. Self-reflection in llm agents: Effects on problem-solving performance. arXiv preprint arXiv:2405.06682, 2024. +Shinn, N., Cassano, F., Gopinath, A., Narasimhan, K., and Yao, S. Reflection: Language agents with verbal reinforcement learning. Advances in Neural Information Processing Systems, 36, 2024. +Silver, D., Hubert, T., Schrittwieser, J., Antonoglou, I., Lai, M., Guez, A., Lanctot, M., Sifre, L., Kumaran, D., Graepel, T., et al. Mastering chess and shogi by self-play with a general reinforcement learning algorithm. arXiv preprint arXiv:1712.01815, 2017. +Snell, C., Lee, J., Xu, K., and Kumar, A. Scaling llm test-time compute optimally can be more effective than scaling model parameters. arXiv preprint arXiv:2408.03314, 2024. +Tang, Z., Zhang, X., Wan, B., and Wei, F. Mathscale: Scaling instruction tuning for mathematical reasoning. arXiv preprint arXiv:2403.02884, 2024. +Team, Q. Qwq: Reflect deeply on the boundaries of the unknown, November 2024. URL https://qwenlm.github.io/blog/qwq-32b-preview/. +Valmeekam, K., Sreedharan, S., Marquez, M., Olmo, A., and Kambhampati, S. On the planning abilities of large language models (a critical investigation with a proposed benchmark). arXiv preprint arXiv:2302.06706, 2023. +Wang, C., Deng, Y., Lv, Z., Yan, S., and Bo, A. Q*: Improving multi-step reasoning for llms with deliberative planning, 2024a. + +Wang, K., Ren, H., Zhou, A., Lu, Z., Luo, S., Shi, W., Zhang, R., Song, L., Zhan, M., and Li, H. Math-coder: Seamless code integration in LLMs for enhanced mathematical reasoning. In The Twelfth International Conference on Learning Representations, 2024b. URL https://openreview.net/forum?id=z8TW0ttBPp. +Wang, P., Li, L., Shao, Z., Xu, R. X., Dai, D., Li, Y., Chen, D., Wu, Y., and Sui, Z. Math-shepherd: Verify and reinforce llms step-by-step without human annotations, 2024c. +Wang, X., Wei, J., Schuurmans, D., Le, Q. V., Chi, E. H., Narang, S., Chowdhery, A., and Zhou, D. Self-consistency improves chain of thought reasoning in language models. In The Eleventh International Conference on Learning Representations, 2023. URL https://openreview.net/forum?id=1PL1NIMMrw. +Weisstein, E. W. Fermat's little theorem, a. URL https://mathworld.wolfram.com/FermatsLittleTheorem.html. +Weisstein, E. W. Vieta's formulas, from mathworld—a wolfram web resource, b. URL http://mathworld.wolfram.com/Tree.html. +Wu, Y., Sun, Z., Li, S., Welleck, S., and Yang, Y. An empirical analysis of compute-optimal inference for problem-solving with language models. arXiv preprint arXiv:2408.00724, 2024. +Xin, H., Guo, D., Shao, Z., Ren, Z., Zhu, Q., Liu, B., Ruan, C., Li, W., and Liang, X. Deepseek-prover: Advancing theorem proving in llms through large-scale synthetic data. arXiv preprint arXiv:2405.14333, 2024. + +Yang, A., Zhang, B., Hui, B., Gao, B., Yu, B., Li, C., Liu, D., Tu, J., Zhou, J., Lin, J., et al. Qwen2. 5-math technical report: Toward mathematical expert model via self-improvement. 
arXiv preprint arXiv:2409.12122, 2024. +Yao, S., Yu, D., Zhao, J., Shafran, I., Griffiths, T., Cao, Y., and Narasimhan, K. Tree of thoughts: Deliberate problem solving with large language models. Advances in Neural Information Processing Systems, 36, 2024. +Yu, F., Gao, A., and Wang, B. Outcome-supervised verifiers for planning in mathematical reasoning. arXiv preprint arXiv:2311.09724, 2023a. +Yu, L., Jiang, W., Shi, H., Yu, J., Liu, Z., Zhang, Y., Kwok, J. T., Li, Z., Weller, A., and Liu, W. Metamath: Bootstrap your own mathematical questions for large language models. arXiv preprint arXiv:2309.12284, 2023b. +Yuan, Z., Yuan, H., Li, C., Dong, G., Lu, K., Tan, C., Zhou, C., and Zhou, J. Scaling relationship on learning mathematical reasoning with large language models. arXiv preprint arXiv:2308.01825, 2023. +Zhang, D., Li, J., Huang, X., Zhou, D., Li, Y., and Ouyang, W. Accessing gpt-4 level mathematical olympiad solutions via monte carlo tree self-refine with llama-3 8b. arXiv preprint arXiv:2406.07394, 2024a. +Zhang, D., Zhoubian, S., Hu, Z., Yue, Y., Dong, Y., and Tang, J. Rest-mcts*: Llm self-training via process reward guided tree search. arXiv preprint arXiv:2406.03816, 2024b. + +# A Appendix + +# A.1 Findings and Discussions + +![](images/894403e640bc1fef66707e8af0e7ce675bbb178fff059b7e9773297f1c66441b.jpg) +Figure 4. An example of intrinsic self-reflection during rStar-Math deep thinking. + +The emergence of intrinsic self-reflection capability. A key breakthrough in OpenAI o1 is its intrinsic self-reflection capability. When the model makes an error, it recognizes the mistake and self-correct with a correct answer (Noam Brown & Lightman, 2024). Yet it has consistently been found to be largely ineffective in open-sourced LLMs. Thus, the community has actively explored various approaches, including self-correction (Huang et al., 2023; Kumar et al., 2024), self-reflection (Renze & Guven, 2024; Shinn et al., 2024), to explicitly train or prompt LLMs to develop such capability. + +In our experiments, we unexpectedly observe that our MCTS-driven deep thinking exhibits self-reflection during problem-solving. As shown in Fig. 4, the model initially formalizes an equation using SymPy in the first three steps, which would lead to an incorrect answer (left branch). Interestingly, in the fourth step (right branch), the policy model recognizes the low quality of its earlier steps and refrains from continuing along the initial problem-solving path. Instead, it backtracks and resolves the problem using a new, simpler approach, ultimately arriving at the correct answer. An additional example of self-correction is provided in Appendix A.3. Notably, no self-reflection training data or prompt was included, suggesting that advanced System 2 reasoning can foster intrinsic self-reflection. + +PPM shapes the reasoning boundary in System 2 deep thinking. Both the policy and reward models are crucial for System 2 deep reasoning. Our experiments show that different policy models achieve similar performance in System 2 reasoning (see Appendix A.2), with the PPM largely determines the upper performance limit. Fig. 5 summarizes the accuracy of policy models of different sizes and the accuracy achieved after applying reward models. We conclude that, despite differences in Pass@1 accuracy due to variations in training methods, data, and model size, the reward model is the key factor in System 2 reasoning. 
For instance, although the SFT accuracy of rStar-Math-7B is lower than Qwen2.5-Math-72B-Instruct, pairing it with our 7B PPM allows rStar-Math to outperform the 72B policy model with Qwen 72B ORM. Moreover, despite varying Pass@1 accuracy across our three policy SLM sizes, the final reasoning accuracy converges after applying the PPM. + +Figure 5. Pass@1 accuracy of policy models and their accuracy after applying System 2 reasoning with various reward models, shows that reward models primarily determine the final performance. +![](images/0d5ba75163cadebd8e0c8ac682c1ded76c7fe376c29dd0ac1c10f5fc9c45c870.jpg) +rStar 7B PPM improvement + +![](images/2c368553923f7d4e141c839dcb971223d5b09877973948340356b4e3beb0fb11.jpg) +rStar Policy model + +![](images/ee730008b88b892009770fba9116968b55ab6fe507eb8f5337a008da38430f44.jpg) +Qwen 72B Policy model + +![](images/1b56a0bfa2b4201d858129d552cbac205cec21591977eee2b0e618a1063bcec1.jpg) +Qwen 72B ORM improvement + +![](images/a735cc3b41ec4ce50bbc2dd1b0c234d9b0cabf855db9d7268cca30cc5e396772.jpg) + +PPM spots theorem-application steps. When solving challenging math problems, identifying and applying relevant theorems or key conclusions often form the cornerstone of successful problem-solving (Xin et al., 2024). In our experiments, we find that during rStar-Math problem-solving, our PPM effectively identifies critical theorem-application intermediate steps within policy model's deep thinking process. These steps are predicted with high reward scores, guiding the policy model to the correct solution. Appendix A.3 provides examples where the PPM successfully identifies key theorems such as Fermat's little theorem (Weisstein, a), Vieta's formulas (Weisstein, b), the AM-GM inequality (amg), the Pythagorean theorem (pyt), and the Shoelace Theorem (sho), etc. + +Generalization to theorem proving. rStar-Math is a generalizable tool to more challenging math tasks, such as theorem proving, though its current focus is on word problems due to dataset limitations. Nonetheless, rStar-Math demonstrates the ability to prove mathematical statements. As shown in Appendix A.3, it successfully proves an Olympiad-level problem involving Fermat's Little Theorem, providing a step-by-step correct proof through its deep reasoning process. This achievement is primarily due to the policy model's strong reasoning capabilities, as the current PPM, which lacks step-level proof training, cannot yet reliably evaluate intermediate steps. Future work will focus on creating step-level proof datasets to enable the PPM to better solve theorem proving tasks. + +# A.2 Additional Experiments and Details + +Data Generation Details. As detailed in Sec. 3.3, each round starts by self-generating step-by-step verified trajectories for $747\mathrm{k}$ math word problems. The maximum tree depth $d$ is set to 16, with 16 MCTS rollouts conducted per problem by default. At each step, we allow to explore 8 candidate nodes, and the constant $c$ in Eq. 1 is set to 2 to promote greater exploration. In the bootstrap round, due to the large size of the initial policy model (236B), we used smaller parameters: 8 rollouts and 5 candidate nodes per step. To improve the accuracy of solving challenging problems in round 4, we increase the number of candidate nodes to 16 and conduct 2 MCTS tree expansions per problem using different random seeds. Detailed prompts are available in Appendix A.4. + +Training Details. 
In each round, we collect step-by-step verified trajectories to fine-tune the policy LLM and train the PPM. To reduce noise in synthetic math problems (e.g., incorrect ground-truth answers labeled by GPT-4), we remove synthetic problems whose trajectories achieve less than $50\%$ accuracy. Based on our extensive experiments, the policy LLM is fine-tuned from the initial base model in each round, rather than being trained incrementally on the model from the previous round. All policy SLMs are trained for 2 epochs with a sequence length of 4096 tokens and a batch size of 128. We use the AdamW optimizer with a linear learning-rate scheduler and an initial learning rate of 7e-6 for the Qwen models, and a cosine scheduler with an initial learning rate of 5e-6 for Phi3-mini-Instruct. The PPM is trained for 1 epoch with a batch size of 512 and an initial learning rate of 7e-6. + +Table 7. Training cost per round
| | GPUs | Training time |
| --- | --- | --- |
| Policy | 8×MI300X | 20 hours |
| PPM | 8×MI300X | 15 hours |
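To make the data-cleaning rule from the training details above concrete, here is a minimal Python sketch of dropping synthetic problems whose step-by-step verified trajectories achieve less than 50% accuracy; the data structures and function name are illustrative and not taken from the rStar-Math codebase.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class VerifiedTrajectory:
    steps: List[str]
    is_correct: bool  # final answer matches the (possibly GPT-4-labeled) ground truth

@dataclass
class Problem:
    question: str
    trajectories: List[VerifiedTrajectory]

def filter_noisy_synthetic_problems(problems: List[Problem],
                                    min_accuracy: float = 0.5) -> List[Problem]:
    """Keep only synthetic problems whose verified trajectories reach at least
    `min_accuracy`; a very low success rate often indicates an incorrect
    ground-truth label rather than a genuinely hard problem."""
    kept = []
    for p in problems:
        if not p.trajectories:
            continue
        accuracy = sum(t.is_correct for t in p.trajectories) / len(p.trajectories)
        if accuracy >= min_accuracy:
            kept.append(p)
    return kept
```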
+ +Training PPM. The PPM is initialized from the fine-tuned policy model, with its next-token prediction head replaced by a scalar-value head consisting of a linear layer and a tanh function to constrain outputs to the range [-1, 1]. We filter out math problems where all solution trajectories are fully correct or fully incorrect. For problems with mixed outcomes, we select two positive and two negative examples for each step based on Q-values, which are used as preference pairs for the training data. + +Training the ORM and PQM. The Outcome Reward Model (ORM) and the Q-value-based Process Reward Model (PQM) share the same model architecture and training parameters as our PPM. For ORM training, we filter trajectories from math problems containing both correct and incorrect solutions. Specifically, the two trajectories with the highest average Q-values are selected as positive examples, while the two with the lowest are chosen as negative examples. Following Qwen2.5-Math (Yang et al., 2024), we adopt the pairwise ranking loss (Ouyang et al., 2022) to optimize the ORM. To train the PQM, we follow Chen et al. (2024) and use step-level Q-values as reward labels. Let $\mathbf{x} = x \oplus s_1 \oplus s_2 \oplus \ldots \oplus s_d$ be a trajectory, with annotated Q-values $\mathbf{Q} = (Q(s_1), Q(s_2), \dots, Q(s_d))$ and predicted Q-values $\mathbf{Q}' = (Q'(s_1), Q'(s_2), \dots, Q'(s_d))$ for its steps. To stabilize PQM training, we treat each trajectory as a single training sample and predict Q-values for all steps simultaneously, rather than splitting it into individual per-step samples. Specifically, to predict the Q-value $Q'(s_i)$ for step $s_i$, the PQM takes the trajectory from the question up to step $s_i$ (i.e., $x \oplus s_1 \oplus s_2 \oplus \ldots \oplus s_i$) as input and outputs a value between -1 and 1. We use a mean squared error (MSE) loss for PQM training: + +$$ +\mathcal{L}_{prm}(\mathbf{x}) = \left\| \mathbf{Q} - \mathbf{Q}^{\prime} \right\|^{2} \tag{5} +$$ + +Table 8. Inference costs of rStar-Math. We show the average number of generated tokens required to generate a trajectory for a given question.
| MATH | AIME 2024 | AMC 2023 | Olympiad Bench | College Math | GSM8K | GaokaoEn 2023 |
| --- | --- | --- | --- | --- | --- | --- |
| 5453 | 15693 | 14544 | 7889 | 4503 | 3299 | 6375 |
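As a concrete companion to the reward-model training described above (the scalar value head with a tanh bound, the pairwise ranking loss used for the PPM/ORM preference pairs, and the per-step MSE objective of Eq. 5 for the PQM), the following is a minimal PyTorch sketch; the module and function names are illustrative rather than taken from the released code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ScalarValueHead(nn.Module):
    """Replaces the next-token prediction head: a linear layer followed by
    tanh, so predicted rewards are constrained to the range [-1, 1]."""
    def __init__(self, hidden_size: int):
        super().__init__()
        self.linear = nn.Linear(hidden_size, 1)

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        # hidden_states: (batch, hidden_size) -> rewards: (batch,)
        return torch.tanh(self.linear(hidden_states)).squeeze(-1)

def pairwise_ranking_loss(pos_rewards: torch.Tensor, neg_rewards: torch.Tensor) -> torch.Tensor:
    """Pairwise ranking loss over preference pairs: a preferred (positive)
    step or trajectory should receive a higher reward than a rejected one."""
    return -F.logsigmoid(pos_rewards - neg_rewards).mean()

def pqm_mse_loss(pred_q: torch.Tensor, target_q: torch.Tensor) -> torch.Tensor:
    """Eq. (5): squared error between annotated and predicted per-step
    Q-values of one trajectory, predicted jointly for all steps."""
    return torch.sum((target_q - pred_q) ** 2)

# Toy usage with random hidden states standing in for encoder outputs.
head = ScalarValueHead(hidden_size=64)
pos_h, neg_h = torch.randn(4, 64), torch.randn(4, 64)
ppm_loss = pairwise_ranking_loss(head(pos_h), head(neg_h))
pqm_loss = pqm_mse_loss(pred_q=head(torch.randn(6, 64)), target_q=torch.rand(6) * 2 - 1)
```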
+ +Table 9. Pass@1 accuracy of the resulting policy SLM in each round, showing continuous improvement until surpassing the bootstrap model. + +
| Round# | MATH | AIME 2024 | AMC 2023 | Olympiad Bench | College Math | GSM8K | GaokaoEn 2023 |
| --- | --- | --- | --- | --- | --- | --- | --- |
| DeepSeek-Coder-V2-Instruct (bootstrap model) | 75.3 | 13.3 | 57.5 | 37.6 | 46.2 | 94.9 | 64.7 |
| Base (Qwen2.5-Math-7B) | 58.8 | 0.0 | 22.5 | 21.8 | 41.6 | 91.6 | 51.7 |
| policy SLM-r1 | 69.6 | 3.3 | 30.0 | 34.7 | 44.5 | 88.4 | 57.4 |
| policy SLM-r2 | 73.6 | 10.0 | 35.0 | 39.0 | 45.7 | 89.1 | 59.7 |
| policy SLM-r3 | 75.8 | 16.7 | 45.0 | 44.1 | 49.6 | 89.3 | 62.8 |
| policy SLM-r4 | 78.4 | 26.7 | 47.5 | 47.1 | 52.5 | 89.7 | 65.7 |
+ +Table 10. The quality of PPM consistently improves across rounds. The policy model has been fixed with policy SLM-r1 for a fair comparison. + +
| Round# | MATH | AIME 2024 | AMC 2023 | Olympiad Bench | College Math | GSM8K | GaokaoEn 2023 |
| --- | --- | --- | --- | --- | --- | --- | --- |
| PPM-r1 | 75.2 | 10.0 | 57.5 | 35.7 | 45.4 | 90.9 | 60.3 |
| PPM-r2 | 84.1 | 26.7 | 75.0 | 52.7 | 54.2 | 93.3 | 73.0 |
| PPM-r3 | 85.2 | 33.3 | 77.5 | 59.5 | 55.6 | 93.9 | 76.6 |
| PPM-r4 | 87.0 | 43.3 | 77.5 | 61.5 | 56.8 | 94.2 | 77.8 |
+ +Inference Setting. In our evaluation, we run multiple MCTS to generate candidate solution trajectories. For each problem, we generate 32 candidate nodes at each step and use the PPM to score each node. Since the PPM is effective at providing step-level quality evaluations, we did not perform multiple MCTS rollouts to update per-step Q-values. After each MCTS finish, we select the trajectory with highest PPM score as the final answer. Table 8 presents the average number of tokens generated to produce a trajectory in MCTS. + +Self-evolution Inference Costs. In the initial bootstrap round, we use DeepSeek-Coder-v2-Instruct (236B) as the policy model, using 10 nodes of $8 \times 80$ GB H100 GPUs with 8 MCTS rollouts. This required approximately two weeks to finish the + +data generation. For rounds 2-4, using our fine-tuned 7B SLM as the policy model, data generation was performed on 15 nodes of $4 \times 40$ GB A100 GPUs, with each round completed in three days. In the final round, to include more challenging problems, we increased the number of MCTS rollouts to 64, extending the data generation time to one week. + +Table 11. Pass@1 accuracy of our fine-tuned policy models for Phi3-mini, Qwen2.5-Math-1.5B, Qwen2-Math-7B and Qwen2.5-Math-7B. + +
| Model | MATH | AIME 2024 | AMC 2023 | Olympiad Bench | College Math | GSM8K | GaokaoEn 2023 |
| --- | --- | --- | --- | --- | --- | --- | --- |
| **General Base Model: Phi3-mini-Instruct (3.8B)** | | | | | | | |
| Phi3-mini-Instruct | 41.4 | 3.3 | 37.5 | 12.3 | 33.1 | 85.7 | 37.1 |
| Our policy model | 68.0 | 10.0 | 37.5 | 36.6 | 48.7 | 87.9 | 53.2 |
| **Math-Specialized Base Model: Qwen2.5-Math-1.5B** | | | | | | | |
| Qwen2.5-Math-1.5B | 51.2 | 0.0 | 22.5 | 16.7 | 38.4 | 74.6 | 46.5 |
| Qwen2.5-Math-1.5B-Instruct | 60.0 | 10.0 | 60.0 | 38.1 | 47.7 | 84.8 | 65.5 |
| Our policy model | 74.8 | 13.3 | 47.5 | 42.5 | 50.1 | 83.1 | 58.7 |
| **Math-Specialized Base Model: Qwen2-Math-7B** | | | | | | | |
| Qwen2-Math-7B | 53.4 | 3.3 | 25.0 | 17.3 | 39.4 | 80.4 | 47.3 |
| Qwen2-Math-7B-Instruct | 73.2 | 13.3 | 62.5 | 38.2 | 45.9 | 89.9 | 62.1 |
| Our policy model | 73.8 | 16.7 | 45.0 | 43.9 | 52.0 | 88.3 | 65.2 |
| **Math-Specialized Base Model: Qwen2.5-Math-7B** | | | | | | | |
| Qwen2.5-Math-7B | 58.8 | 0.0 | 22.5 | 21.8 | 41.6 | 91.6 | 51.7 |
| Qwen2.5-Math-7B-Instruct | 82.6 | 6.0 | 62.5 | 41.6 | 46.8 | 95.2 | 66.8 |
| Our policy model | 78.4 | 26.7 | 47.5 | 47.1 | 52.5 | 89.7 | 65.7 |
+ +![](images/51cd0e1786ab23cf1fe745e12ac7446ab6ef1cedeab92ee5e352253e6b72e250.jpg) +Figure 6. Pass@N accuracy with random sampling from different policy models. Compared to the official Qwen instruct version, our policy model exhibits a stronger ability to sample correct solutions. + +Pass@N. Table 11 compares the math reasoning performance of our policy models with the instruct versions developed by the original model team. Our policy models do not consistently outperform the instruct versions. For example, on the Qwen2.5-Math-7B base model, Qwen2.5-Math-7B-Instruct achieves $4.2\%$ higher accuracy on the MATH benchmark. However, the pass@1 accuracy alone does not fully reflect the reasoning capabilities for the policy model in System 2 deep thinking paradigm. To provide a more comprehensive evaluation, Fig.6 and Fig.7 present the pass@N accuracy. In this metric, the policy model generates $N$ solutions under two settings: random sampling (Fig.6) and PPM-augmented MCTS deep thinking (Fig.7). A problem is considered solved if one solution is correct. As shown in Figure 6, our policy model performs similarly to Qwen2.5-Math-7B-Instruct on the MATH benchmark's pass@64 and significantly outperforms it on others. This suggests that despite initial pass@1 accuracy differences, our policy model can generate correct solutions through multiple samples. Figure 7 further compares the pass@N accuracy of our four policy models (different sizes) after MCTS deep thinking under the same PPM guidance. We can observe that after generating 64 trajectories, the pass@N accuracy of different policy models becomes comparable. + +Ablation on the MCTS search parameters. We perform additional analysis on the number of candidate nodes per step in MCTS. As shown in Table 12, increasing the number of candidate nodes generally improves accuracy, but beyond 32 nodes, performance saturates. + +![](images/8c88d10f71bae97b4f532cebc3c5b76515fbcfdae3355ba74322e6009c321da7.jpg) +Figure 7. Pass@N accuracy with PPM-augmented MCTS. Under the same PPM guidance, the four policy models of varying sizes demonstrate convergent capabilities in sampling correct solutions. + +Table 12. Ablation study on the number of candidate nodes in MCTS. + +
| | MATH-500 (8 rollouts) | MATH-500 (64 rollouts) | AIME 2024 (8 rollouts) | AIME 2024 (64 rollouts) |
| --- | --- | --- | --- | --- |
| node=4 | 87.2 | 88.8 | 33.3 | 36.7 |
| node=8 | 87.2 | 88.8 | 33.3 | 43.3 |
| node=16 | 88.4 | 89.0 | 36.7 | 50.0 |
| node=32 | 89.4 | 89.0 | 50.0 | 53.3 |
| node=40 | 89.4 | 90.0 | 46.7 | 53.3 |
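As a simplified illustration of the PPM-guided inference loop described in the Inference Setting above (propose candidate steps, score each candidate with the PPM, keep the best one, and return the highest-scoring trajectory), here is a greedy step-level sketch in Python. The real system runs MCTS; the callables `propose_steps` and `score_step`, the stopping marker, and the data structure are illustrative stand-ins for the policy SLM and the PPM.

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Trajectory:
    steps: List[str] = field(default_factory=list)
    score: float = 0.0  # PPM score of the last accepted step

def ppm_guided_solve(question: str,
                     propose_steps: Callable[[str, List[str], int], List[str]],
                     score_step: Callable[[str, List[str], str], float],
                     num_candidates: int = 32,
                     max_depth: int = 16) -> Trajectory:
    """Greedy simplification of PPM-guided deep thinking: at every step,
    sample `num_candidates` candidate steps from the policy, score each with
    the PPM, keep the highest-scoring one, and stop once a step contains the
    end-of-answer marker or the maximum depth is reached."""
    traj = Trajectory()
    for _ in range(max_depth):
        candidates = propose_steps(question, traj.steps, num_candidates)
        scored = [(score_step(question, traj.steps, c), c) for c in candidates]
        best_score, best_step = max(scored, key=lambda pair: pair[0])
        traj.steps.append(best_step)
        traj.score = best_score
        if "<end_of_answer>" in best_step:  # illustrative stopping marker
            break
    return traj
```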
+ +# A.3 Examples of Reasoning Trajectories + +# Self-correction Example + +Question: The coordinates of a parallelogram are (5, 3), (6, 8), (7, 4) and $(x,y)$ and $x > 7$ . What is the value of $x + y$ + +# Our trajectory: + +< code > + +# Import necessary libraries + +import numpy as np + + + +Define the coordinates of the parallelogram + +A = np.array([5, 3]) + +B = np.array([6, 8]) + +C = np.array([7, 4]) + + + +Calculate the vector differences to find the missing coordinate D + +# In a parallelogram, the vector difference between two adjacent sides is equal to the vector difference between the other two adjacent sides + +We can use the formula: $\mathrm{D} = \mathrm{A} + (\mathrm{C} - \mathrm{B})$ + +$\mathrm{D} = \mathrm{A} + (\mathrm{C} - \mathrm{B})$ + + + +Since the coordinates provided are (5, 3), (6, 8), (7, 4) and (x, y), we can infer that the vector from B to C should be the same as the vector from A to the missing point (x, y) + +This means: $(\mathrm{x - 5},\mathrm{y - 3}) = (1, - 4)$ because the vector from B to C is $(7 - 6,4 - 8) = (1, - 4)$ + +Solve for x and y + +$\mathrm{x} = 5 + 1$ + +y $= 3 + (-4)$ + + + +Verify that the point $(\mathbf{x},\mathbf{y})$ exists and satisfies the condition $\mathrm{x}$ 7 + +Since $x = 6$ is not greater than 7, we need to use the correct formula that maintains the parallelogram property and $x$ + +# Correcting the formula to find the other possible D [Self-correction!] + +$\mathrm{D} = \mathrm{C} + (\mathrm{B} - \mathrm{A})$ + + + +Solve for $x$ and $y$ again + +$\mathrm{x} = 7 + (6 - 5)$ + +y = 4 + (8 - 3) + + + +Verify that the point $(\mathrm{x},\mathrm{y})$ exists and satisfies the condition $\mathrm{x}\dot{\zeta} 7$ + +Since $x = 8$ is greater than 7, which satisfies the condition given, we can now find the sum of $x$ and $y$ + +x-plus_y=x+y + + + +Now print the final answer + +print(x_plus_y) + +This should print the value of $x + y$ + + + +17 + +The value of $\mathbf{x} + \mathbf{y}$ is $\boxed{17}$ < end_of_answer > + +# Math statement proof using Fermat's little theorem + +Question: If ( pt+a ) is a perfect square for some value of ( t ) , then ( a $\frac{p - 1}{2} - 1$ ) is divisible by ( p ) (where ( p ) is a prime number). + +# Our trajectory for proving: + +$< \text{code}>$ + +Step 1: Define the variables + +from sympy import symbols, isprime, sqrt, modInverse + +Define the variables + +p,t,a $=$ symbols('p t a') + + + +Step 2: Assume $\mathrm{p}^{*}\mathrm{t} + \mathrm{a}$ is a perfect square + +This means there exists an integer $k$ such that $p^* t + a = k^2$ . + +Rearrange to find a in terms of p, t, and k + +a = k² - p* t + + + +# Step 3: Use Fermat's Little Theorem + +Fermat's Little Theorem states that if $p$ is a prime and $a$ is an integer not divisible by $p$ , + +then $\mathrm{a}^{(p - 1)} \equiv 1 (\mathrm{mod} \mathrm{p})$ + +We need to show that $\mathbf{a}^{((p - 1) / 2)} - 1$ is divisible by $p$ . + + + +Step 4: Substitute $\mathrm{a} = \mathrm{k}^2 - \mathrm{p}^*\mathrm{t}$ into Fermat's Little Theorem + +$\mathbf{a}^{((p - 1) / 2)} \equiv (\mathrm{k}^2 - \mathrm{p}^*\mathrm{t})^{((p - 1) / 2)} \pmod{\mathrm{p}}$ + +Since $\mathrm{p}^*\mathrm{t}$ is divisible by $\mathfrak{p}$ , $(\mathrm{k}^2 -\mathrm{p}^*\mathrm{t})^{((p - 1) / 2)}\equiv \mathrm{k}^{(p - 1)}$ (mod p) + +By Fermat's Little Theorem, $\mathrm{k}^{(p - 1)}\equiv 1$ (mod p) + + + +Step 5: Therefore, $\mathfrak{a}^{((p - 1) / 2)} \equiv 1$ (mod p) + +This implies $\mathfrak{a}^{((p - 1) / 2)} - 1 \equiv 0 \, (\mathrm{mod} \, \mathfrak{p})$ . 
+ +Hence, $\mathrm{a}^{((p - 1) / 2)} - 1$ is divisible by $p$ + + + +Now print the final answer + +print("The expression $\mathrm{a}^{((p - 1) / 2)} - 1$ is divisible by p.") + + + + + +The expression $a^{((p - 1) / 2)} - 1$ is divisible by $p$ . + +The statement is true. + + + +# Problem-solving with application of the Vieta's formulas and the AM-GM inequality + +Question: Let $a$ be a positive real number such that all the roots of + +$$ +x ^ {3} + a x ^ {2} + a x + 1 = 0 +$$ + +are real. Find the smallest possible value of $a$ . + +# Our trajectory for proving: + +code + +Step 1: Define the polynomial equation + +from sympy import symbols, solve, Eq + +Define the variable and the parameter + +x, a = symbols('x a') + +< end_of_step >[PPM Score: -0.46430] + +Step 2: Define the polynomial equation + +polynomial = x**3 + a*x**2 + a*x + 1 + +< end-of_step >[PPM Score: 0.53937] + +Step 3: Use Vieta's formulas to relate the coefficients to the roots + +Since the polynomial is monic (leading coefficient is 1), the sum of the roots (taken one at a time) is -a + +The sum of the products of the roots taken two at a time is a + +The product of the roots is -1 + +Since all roots are real, we can denote them as r1, r2, r3 + +From Vieta's formulas: + +$\mathrm{r1 + r2 + r3 = -a}$ + +$\mathrm{r1^{*}r2 + r2^{*}r3 + r3^{*}r1 = a}$ + +r1\*r2\*r3=-1 + +< end_of_step >[PPM Score: 0.9989] + +Step 4: Use the AM-GM inequality to find the minimum value of a + +By AM-GM, $(\mathrm{r1}^{*}\mathrm{r2} + \mathrm{r2}^{*}\mathrm{r3} + \mathrm{r3}^{*}\mathrm{r1}) / 3 > = (\mathrm{r1}^{*}\mathrm{r2}^{*}\mathrm{r3})^{(2 / 3)}$ + +Substituting the known values: + +$\mathrm{a} / 3 > = (-1)^{(2 / 3)}$ + +a/3>=1 + +a $> = 3$ + +$<$ end_of_step $>$ [PPM Score: 0.9999] + +Step 5: Verify if $a = 3$ is achievable + +If $\mathrm{a} = 3$ , then $\mathrm{r1} + \mathrm{r2} + \mathrm{r3} = -3$ and $\mathrm{r1}^* \mathrm{r2} + \mathrm{r2}^* \mathrm{r3} + \mathrm{r3}^* \mathrm{r1} = 3$ + +We need to check if there exist real roots that satisfy these conditions + +Consider the case where $r1 = r2 = r3 = -1$ + +Then $\mathrm{r}1 + \mathrm{r}2 + \mathrm{r}3 = -3$ and $\mathrm{r}1^{*}\mathrm{r}2 + \mathrm{r}2^{*}\mathrm{r}3 + \mathrm{r}3^{*}\mathrm{r}1 = 3$ , and $\mathrm{r}1^{*}\mathrm{r}2^{*}\mathrm{r}3 = -1$ + +Thus, $a = 3$ is achievable with real roots + +< end_of_step >[PPM Score: 1.0] + +Step 6: Print the final answer print(3) + + + +$<\text{output}>3 < \text{end\_of\_output}>$ + +The smallest possible value of a is $\boxed{3}$ . + + + +Problem-solving with application the Shoelace Theorem +Question: In the diagram, $D$ and $E$ are the midpoints of $\overline{AB}$ and $\overline{BC}$ respectively. Determine the area of quadrilateral DBEF. 
[asy] size(180); defaultpen(lenwidth(.7pt)+fontsize(10pt)); pair A, B, C, D, E, F; A=(0,6); B=(0,0); C=(8,0); D=(0,3); E=(4,0); F=(8/3,2); draw(E-A-C-D); draw(-1,0)-(10,0), EndArrow); draw((0,-1)-(0,8), EndArrow); label("A(0,6)", A, W); label("B(0,0)", B, SW); label("C(8,0)", C, S); label("D", D, W); label("E", E, S); label("F", F, SW); label("x", (10,0), dir(0)); label("y", (0,8), dir(90)); [/asy] +Our trajectory for proving: + $< \mathrm{code}>$ +# Step 1: Define the coordinates of points A, B, C, D, and E +A = (0, 6) +B = (0, 0) +C = (8, 0) +D = (0, 3) # Midpoint of AB +E = (4, 0) # Midpoint of BC + $< \mathrm{end\_of\_step}>$ [PPM Score: -0.76639] +# Step 2: Define the coordinates of point F using the section formula (known to be (8/3, 2)) +F = (8/3, 2) + $< \mathrm{end\_of\_step}>$ [PPM Score: 0.8429] +# Step 3: Use the Shoelace Theorem to find the area of quadrilateral DBEF +# Shoelace Theorem formula: Area = 0.5 * x1y2 + x2y3 + x3y4 + x4y1 - (y1x2 + y2x3 + y3x4 + y4x1) +# Coordinates of D, B, E, F are (0, 3), (0, 0), (4, 0), (8/3, 2) respectively +x1 , y1 = D +x2 , y2 = B +x3 , y3 = E +x4 , y4 = F +area = 0.5 * abs(x1*y2 + x2*y3 + x3*y4 + x4*y1 - (y1*x2 + y2*x3 + y3*x4 + y4*x1)) + $< \mathrm{end\_of\_step}>$ [PPM Score: 0.9972] +# Step 4: Print the area of quadrilateral DBEF +print (area) + $< \mathrm{end\_of\_code}>$ $< output>8.0 < end\_of\_output>$ $< answer>\boxed{8}< end\_of\_answer>$ + +# A.4 Prompt Examples + +MCTS Prompt for DeepSeek-V2-Coder-Instruct (bootstrap round) + +You are a powerful agent with broad math knowledge and great python programming skills. You need to use python interpreter to do accurate calculation on math equations. + +!!! Remember: + +1. Use code solve the problem step by step. The solution should include three parts: $<$ code $>$ , $<$ output $>$ , and $<$ answer $>$ . +2. All calculations should be done in python code. Provide concise reasoning and thinking in the comments of the code. +3. The most related python packages include 'math', 'sympy', 'scipy', and 'numpy'. +4. Please use the following template: + +Question: the input question + +< code >Construct the code step by step. Use < end_of_step > to indicate the end of each step. Ensure your code can execute correctly(excluding < end_of_step >) and print the answer. Avoid undefined variables (NameError), unimported packages, or formatting errors (SyntaxError, TypeError). In the last step of the code, print the final answer and add a comment: Now print the final answer.< end_of_code > + +$<$ output $>$ Execute the code in using the Python interpreter and display the printed results. $<$ end_of_output $>$ + +< answer >The concise answer without verbose context, put your final answer's numerical part (without unit, only focus on the numerical part if it's a choice question) in + +boxed.< end_of_answer > Now! It's your turn. + +Question: {input} + +The following are 2 demonstration examples: + +Question: Terrell usually lifts two 20-pound weights 12 times. If he uses two 15-pound weights instead, how many times must Terrell lift them in order to lift the same total weight? 
+ +$< \text{code}>$ + +Step 1: Calculate the total weight lifted with two 20-pound weights + +total_weight_20 = 2 * 20 * 12 + + + +Step 2: Calculate the weight lifted per repetition with two 15-pound weights + +weight_per_rep_15 = 2 * 15 + + + +Step 3: Calculate the number of repetitions needed to lift the same total weight with two 15-pound weights + +reps_needed = total_weight_20 / weight_per_rep_15 + + + +Now print the final answer + +print(reps_needed) + + + +$<$ output $>16.0 <$ end_of_output $>$ < answer >From the result, we can see that Terrell must lift the 15-pound weights + +boxed16 times to lift the same total weight. + +< end_of_answer >, + +Question: Find the value of $x$ that satisfies $\frac{\sqrt{3x + 5}}{2} = \frac{\sqrt{5}}{2}$ . Express your answer as a common fraction. + + + +from sympy import symbols, Eq, solve, sqrt + +Define the variable x + +$\mathbf{x} =$ symbols('x') + + + +Define the equation + +equation $= \mathrm{Eq}(\mathrm{sqrt}(3^{*}\mathrm{x} + 5) / \mathrm{sqrt}(6^{*}\mathrm{x} + 5),\mathrm{sqrt}(5) / 3)$ + + + +Solve the equation for $x$ + +solution $=$ solve(equation, x) + + + +Now print the final answer + +print解决问题) + + + +$<$ output $>$ [20/3] + +$<$ end_of_output $>$ < answer $>$ From the result, we can see that the value of x is + + + +MCTS Prompt for our policy SLM +```txt +MCTS prompt for next step generation: +<|user|>: +{input question} +<|assistant|>: Let's think step by step and solve the problem with code. {trajectory of previous steps} +SFT Prompt Example: +<|user|>: +How many numbers can you get by multiplying two or more distinct members of the set 1, 2, 3, 5, 11 together? +<|assistant|>: Let's think step by step and solve the problem with code. +# Step 1: Create the set of numbers +numbers = {1, 2, 3, 5, 11} + +# Step 2: Define a function to generate all possible products of two or more distinct members of the set +from itertools import combinations +def generate_products (numbers): + products = set() + for r in range(2, len(numbers) + 1): + # For combinations of 2, 3, 4, and 5 elements + for combo in combinations(numbers, r): + product = 1 + for num in combo: + product *= num + products.add/product) + return products + +# Step3: Calculate the productsize-products (numbers) + +# Step 4: Calculate the number of unique products +num_unique-products = len(unique-products) + +# Now print the final answer +print(num_unique-products) + +15 +From the result, we can see that there are 15 unique products. 
+ +``` \ No newline at end of file diff --git a/ICML/2025/rStar-Math_ Small LLMs Can Master Math Reasoning with Self-Evolved Deep Thinking/images.zip b/ICML/2025/rStar-Math_ Small LLMs Can Master Math Reasoning with Self-Evolved Deep Thinking/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..8142f3b7255d3649613ae1d3513f7aa8d082d2b2 --- /dev/null +++ b/ICML/2025/rStar-Math_ Small LLMs Can Master Math Reasoning with Self-Evolved Deep Thinking/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f0e6993b1a336fb1434c45973a267c18308db073e52df33a24c981aa3405ef68 +size 983265 diff --git a/ICML/2025/rStar-Math_ Small LLMs Can Master Math Reasoning with Self-Evolved Deep Thinking/layout.json b/ICML/2025/rStar-Math_ Small LLMs Can Master Math Reasoning with Self-Evolved Deep Thinking/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..28f1abc0525d8d78a014ed6765378f85c6873484 --- /dev/null +++ b/ICML/2025/rStar-Math_ Small LLMs Can Master Math Reasoning with Self-Evolved Deep Thinking/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9ec0d2d0579140e04fdc0e8918e636499bd6fe35a569cabe3b119179989ec0ca +size 797338 diff --git a/ICML/2025/scSSL-Bench_ Benchmarking Self-Supervised Learning for Single-Cell Data/e3272a7b-9b2d-4675-9e7c-9b951557e63e_content_list.json b/ICML/2025/scSSL-Bench_ Benchmarking Self-Supervised Learning for Single-Cell Data/e3272a7b-9b2d-4675-9e7c-9b951557e63e_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..06c7e9bd17369f1a7ea40e2be6d6a370cd50de9d --- /dev/null +++ b/ICML/2025/scSSL-Bench_ Benchmarking Self-Supervised Learning for Single-Cell Data/e3272a7b-9b2d-4675-9e7c-9b951557e63e_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9cf6748fc5aac679f7d2fe015da91f6c49b746d26d4ef7a259867b8ba06c6bf6 +size 163851 diff --git a/ICML/2025/scSSL-Bench_ Benchmarking Self-Supervised Learning for Single-Cell Data/e3272a7b-9b2d-4675-9e7c-9b951557e63e_model.json b/ICML/2025/scSSL-Bench_ Benchmarking Self-Supervised Learning for Single-Cell Data/e3272a7b-9b2d-4675-9e7c-9b951557e63e_model.json new file mode 100644 index 0000000000000000000000000000000000000000..559e4cab7aa537106b1e6890d413d611beec69f9 --- /dev/null +++ b/ICML/2025/scSSL-Bench_ Benchmarking Self-Supervised Learning for Single-Cell Data/e3272a7b-9b2d-4675-9e7c-9b951557e63e_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f92f16ef487a9b914164f93fe8dc830bbf12a94cfd3ec1433b051239b6624b43 +size 191783 diff --git a/ICML/2025/scSSL-Bench_ Benchmarking Self-Supervised Learning for Single-Cell Data/e3272a7b-9b2d-4675-9e7c-9b951557e63e_origin.pdf b/ICML/2025/scSSL-Bench_ Benchmarking Self-Supervised Learning for Single-Cell Data/e3272a7b-9b2d-4675-9e7c-9b951557e63e_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..8a51c93841cb82dc66f5e28d9c49c2ad5800dd67 --- /dev/null +++ b/ICML/2025/scSSL-Bench_ Benchmarking Self-Supervised Learning for Single-Cell Data/e3272a7b-9b2d-4675-9e7c-9b951557e63e_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1b14bf132258ea8f9cdd5d9e243e2e8c94ab3db793a617ab8c7c0cba034ad956 +size 2285118 diff --git a/ICML/2025/scSSL-Bench_ Benchmarking Self-Supervised Learning for Single-Cell Data/full.md b/ICML/2025/scSSL-Bench_ Benchmarking Self-Supervised Learning for Single-Cell Data/full.md new file mode 100644 index 
0000000000000000000000000000000000000000..e75aa05259c516a6be98d398803afa6337bafe01 --- /dev/null +++ b/ICML/2025/scSSL-Bench_ Benchmarking Self-Supervised Learning for Single-Cell Data/full.md @@ -0,0 +1,443 @@ +# scSSL-Bench: Benchmarking Self-Supervised Learning for Single-Cell Data + +Olga Ovcharenko $^{*1}$ Florian Barkmann $^{*2}$ Philip Toma $^{*2}$ Imant Daunhawer $^{2}$ Julia E. Vogt $^{2}$ Sebastian Schelter $^{†1}$ Valentina Boeva $^{†234}$ + +# Abstract + +Self-supervised learning (SSL) has proven to be a powerful approach for extracting biologically meaningful representations from single-cell data. To advance our understanding of SSL methods applied to single-cell data, we present scSSL-Bench, a comprehensive benchmark that evaluates nineteen SSL methods. Our evaluation spans nine datasets and focuses on three common downstream tasks: batch correction, cell type annotation, and missing modality prediction. Furthermore, we systematically assess various data augmentation strategies. Our analysis reveals task-specific trade-offs: the specialized single-cell frameworks, scVI, CLAIRE, and the finetuned scGPT excel at uni-modal batch correction, while generic SSL methods, such as VICReg and SimCLR, demonstrate superior performance in cell typing and multi-modal data integration. Random masking emerges as the most effective augmentation technique across all tasks, surpassing domain-specific augmentations. Notably, our results indicate the need for a specialized single-cell multimodal data integration framework. scSSL-Bench provides a standardized evaluation platform and concrete recommendations for applying SSL to single-cell analysis, advancing the convergence of deep learning and single-cell genomics. + +# 1. Introduction + +Recent progress in single-cell RNA sequencing (scRNA-seq) and multi-omics sequencing technologies has transformed our understanding of cellular heterogeneity by en + +*† Equal contribution 1BIFOLD & TU Berlin, Berlin, Germany 2Department of Computer Science, ETH Zurich, Zurich, Switzerland 3Swiss Institute of Bioinformatics, Lausanne, Switzerland 4Paris Cité University, Cochin Institute, INSERM U1016, Paris, France. Correspondence to: Olga Ovcharenko , Sebastian Schelter , Valentina Boeva . + +Proceedings of the $42^{nd}$ International Conference on Machine Learning, Vancouver, Canada. PMLR 267, 2025. Copyright 2025 by the author(s). + +abling cell molecular profiling at unprecedented resolution (Sikkema et al., 2023; Eraslan et al., 2022). This breakthrough has revolutionized our ability to understand diseases, develop personalized treatments, and trace the origins of complex conditions like cancer and autoimmune disorders. scRNA-seq (Tang et al., 2009) captures gene expression levels in individual cells and generates a high-dimensional matrix where each row represents a cell and each column represents a gene's expression level. Multi-omics approaches simultaneously measure additional molecular features, including chromatin accessibility through ATAC-seq (Grandi et al., 2022) or protein levels via CITE-seq (Stoeckius et al., 2017). Modern multi-omics experiments generate massive datasets encompassing hundreds of thousands of cells, with each cell characterized by diverse measurements: the expression of tens of thousands of genes, the accessibility of hundreds of thousands of chromatin regions, and the abundance of hundreds of surface proteins. Multi-modal profiling provides an unprecedented view of cellular state and function. 
However, the resulting datasets are susceptible to batch effects — systematic technical variations introduced during sample preparation, sequencing, or processing (Lahnemann et al., 2020). If left uncorrected, batch effects mask genuine biological signals and compromise downstream analyses (Heumos et al., 2023). For instance, when comparing blood samples from cancer patients processed in different laboratories, batch effects can make immune cells from the same patient appear more different from each other than from cells of other patients, masking crucial patterns in how the immune system responds to the tumor (Slyper et al., 2020). + +The success of self-supervised learning (SSL) methods in computer vision (He et al., 2020; Chen et al., 2020), video processing (Schiappa et al., 2023), and natural language processing (Min et al., 2023) has inspired their application to single-cell data. Several models have been adapted for analyzing single-cell data (Han et al., 2022; Liu et al., 2024; Li et al., 2023; Tang et al., 2024), showing promising results in mitigating batch effects and improving downstream analyses. There is an interest in the genomics community in finding standardized approaches for applying SSL to single-cell analysis. A recent work (Richter et al., 2024a) discusses scenarios in which SSL is applicable to single-cell genomics. + +The authors compare the performance of masked autoencoders and two SSL methods (BYOL (Grill et al., 2020) and Barlow Twins (Zbontar et al., 2021)), discuss the effects of pre-training on auxiliary data, and empirically study the efficacy of zero-shot and fine-tuned SSL. However, their work lacks a comparison to specialized single-cell models and does not explore individual hyperparameters and regularization techniques. Furthermore, building upon innovations in natural language processing (Vaswani et al., 2017; Devlin et al., 2019b), single-cell foundation models (Cui et al., 2024; Yang et al., 2022a; Theodoris et al., 2023) have recently emerged as powerful tools to understand cellular heterogeneity and gene-gene interactions, and require a comparison to contrastive methods. + +Since the majority of SSL methods were originally developed for image and text data, there is a lack of a systematic comparison of models, hyperparameters, training regimes, regularization techniques, and augmentations for single-cell genomics data (Toma et al., 2024). This knowledge gap limits our understanding of how to effectively adapt and optimize SSL methods for single-cell data. Our work seeks to fill this gap and focuses on the following research questions: + +- $RQ1$ - Do specialized single-cell SSL methods outperform generic SSL methods? How does the performance of SSL models differ for uni-omics and multi-omics data? +- $RQ2$ - How do hyperparameters and augmentation techniques impact the performance of generic SSL methods for single-cell data? +- $RQ3$ - Are batch normalization and multi-modal integration techniques proposed for image data beneficial for single-cell genomics data as well? + +Our main contribution is an open-source benchmark, scSSL-Bench, which compares the performance of several self-supervised learning methods for single-cell data. (1) To address $RQ1$ , we evaluate nineteen generic and specialized single-cell SSL methods across seven different single-cell uni-modal and two multi-modal datasets, assessing their performance on three common downstream tasks: batch correction, cell type annotation, and missing modality prediction (Subsection 4.1). 
Our results reveal that specialized frameworks, scVI and CLAIRE, together with the foundation model, scGPT, are the best for uni-modal batch correction, while generic SSL techniques such as VICReg and SimCLR outperform domain-specific methods for multi-modal batch correction and the other two tasks on single-modal data. (2) For $RQ2$ , we evaluate various model architectures and hyperparameters, including representation and projection dimensionality, augmentation strategies, and multi-modal integration methods. (Subsection 4.2). Overall, we find that a moderate to larger embedding dimensionality consistently leads to improved results and identify masking as + +the most beneficial augmentation technique that surpasses biology-specific augmentations. (3) An assessment of design decisions suggested in the related work, e.g., retaining projector and domain-specific batch normalization, helps to answer $RQ3$ and to find best practices that can be adopted by the single-cell genomics community (Subsection 4.3). We find that neither domain-specific batch normalization nor retaining the projector during inference improves results. + +We provide our benchmark code under an open license at https://github.com/BoevaLab/scSSL-Bench for reproducibility and for fostering further research on SSL for single-cell data. + +# 2. Background + +In this Section, we discuss SSL on single-cell data, the corresponding downstream tasks, and specialized methods. + +# 2.1. Machine Learning on Single-Cell Data + +Data: There exist multiple technologies that measure different aspects of the cellular state. scRNA-seq (Tang et al., 2009) measures which genes are expressed in each cell and produces a high-dimensional sparse count matrix representing individual cells as rows and genes as columns. CITE-seq (Stoeckius et al., 2017), $10\mathrm{x}$ multiome (Baysoy et al., 2023), and TEA-seq (Swanson et al., 2020) profile complementary to gene expression aspects such as chromatin accessibility and protein abundance within a cell. + +Downstream Tasks: Learned cell representations are commonly used for multiple downstream tasks. + +Batch correction - Single-cell data can be affected by batch effects, which challenges the ability to measure true biological variation (Yu et al., 2023; Polanski et al., 2019). Batch effects are technical biases introduced while sequencing because of differences in sequencing platforms, timing, reagents, or experimental conditions across laboratories (Zhang et al., 2024). To address the presence of batch effects, a common approach is learning a batch-corrected lower-dimensional embedding, where cells cluster based on their cell type and cell state rather than their experimental batch of origin (Hao et al., 2024) (Figure G2 in the Appendix illustrates cells before and after batch correction). + +Cell type annotation – This task (also called as query-to-reference mapping) revolves around unsupervised transfer learning (Yang et al., 2022b), where the primary objective is to annotate cells of a hold-out dataset (query) by mapping them to a joint latent space of a pre-annotated train dataset (reference), whose cell types are known (Lotfollahi et al., 2022). Once test and train data are aligned, held-out cells are annotated using a classifier trained on embeddings of the reference dataset. Figure G3 in the Appendix visualizes how + +![](images/c980650c4a1dece5c90d3bb44ea9a25bf943abdcfa2e49e7e86df5627d20af71.jpg) +Figure 1. 
Outline of scSSL-Bench: 1 As input, scSSL-Bench takes scRNA-seq data (cell-by-gene count matrix), where each value in the matrix represents the number of reads in a cell for the corresponding gene. 2 scSSL-Bench trains one of nineteen methods: Generic, specialized contrastive (Bio Contrast), specialized generative (Bio Gen), and baselines. For self-supervised generic methods, scSSL-Bench uses augmentations 3 to create two views of a cell. 4 The learned embeddings are evaluated on three downstream tasks. + +the learned representations of train and hold-out sets and train cell types are used to predict the cell types of hold-out data (query, blue result) during subsequent inference. + +Missing modality prediction - For multi-modal datasets, missing modality prediction enables the inference of unmeasured (missing) modalities in held-out (query) cells (Yang et al., 2022b). Given multi-modal train data (reference) with RNA and protein expressions and hold-out data containing only RNA, the goal is to predicts the hold-out dataset's original protein values by averaging the proteins of the nearest neighbors from the train set (referred to as kNN probing). + +# 2.2. Self-Supervised Learning (SSL) Methods + +SSL aims to discover useful data representations without relying on annotations (Geiping et al., 2023) by leveraging the dis-/similarity of data samples. We refer to Appendix A for details on generic SSL methods (and their contrastive and non-contrastive variants). + +Single-Cell Contrastive Methods: There are several SSL frameworks tailored for single-cell data. CLEAR (Han et al., 2022) employs contrastive SSL and leverages InfoNCE loss (van den Oord et al., 2019). Positive/negative pairs are created by adding Gaussian noise, random masking, or crossing over genes between two cells. CLAIRE (Yan et al., 2023) suggests a novel augmentation strategy by finding mutual nearest neighbors (MNN) between and nearest neighbors (KNN) within experimental batches in a dataset. CLAIRE uses inter-biological-batch MNN pairs as initial positive pair seeds, which are then "mixed" with intra-biological-batch neighbors to generate positive pairs. CLAIRE extends MoCo's (He et al., 2020) architecture with online and momentum encoders. Concerto (Yang et al., 2022b) + +is a contrastive self-supervised distillation framework that uses an asymmetric teacher-student network structure (Hu et al., 2023) and dropout to create two augmented views of a cell (a positive pair). Positive/negative pairs are contrasted using NTXent loss (Sohn, 2016). Concerto also supports single-cell multi-modal data, e.g., pairs of RNA and protein. The scCLIP (Xiong et al., 2023) method is a generalized multi-modal transformer model that applies contrastive learning to single-cell multi-omics data, which adopts ideas from CLIP (Radford et al., 2021) by defining modality-specific encoders, constructing positive pairs from two modalities of the same cell, and contrasting them using InfoNCE loss (van den Oord et al., 2019). + +Single-Cell Generative Methods: Generative approaches, from variational autoencoders (VAE) to transformer-based foundation models, are used to learn single-cell representations and exploit the biological batch/cell type annotations during training, which can be seen as leaking information compared to contrastive SSL methods. The state-of-the-art single-cell method scVI (Lopez et al., 2018) is a widely-used VAE that leverages a zero-inflated negative binomial distribution as reconstruction loss. 
A multi-modal version of scVI, totalVI (Gayoso et al., 2021), allows joint analysis of RNA and protein expressions. SCDC (Li et al., 2024) is a uni-modal method that employs biological batch and cell type encoders to create a concatenated representation that is reconstructed using the decoder. To improve the discrimination of the batch information, SCDC uses a specialized batch discriminator. scTEL (Chen et al., 2025) leverages transformer and LSTM layers to establish a mapping from RNA expression to unobserved protein expression in the same cells. scButterfly (Cao et al., 2024b) supports CITE-seq (Stoeckius et al., 2017) data and employs a dual-VAE architecture with modality-specific pretraining and a dis + +criminalitor to encourage the mixing of different modalities. Recently, transformer-based single-cell foundation models (scFMs) have emerged. For generalizability, scFMs are pretrained on tens of millions of cells. scBERT (Yang et al., 2022a) adapts BERT's (Devlin et al., 2019a) masked language modeling to learn contextual gene representations, scGPT (Cui et al., 2024) uses GPT-style pretraining to create transferable representations across cell types and experimental conditions, and Geneformer (Theodoris et al., 2023) employs a transformer architecture pre-trained on large-scale gene expression datasets to capture gene-gene interactions. + +# 3. Benchmark Design + +Figure 1 illustrates our design of scSSL-Bench. The input of the benchmark are cell-by-gene count matrices containing scRNA-seq or CITE-seq data ①. Depending on the data and SSL method, scSSL-Bench trains one of nineteen representation learning frameworks ② using augmentations to create positive/negative pairs ③ for self-supervised approaches. Finally, the learned representations are evaluated on three downstream tasks ④. + +# 3.1. Datasets, Models, and Tasks + +Datasets: We consider nine single-cell genomics datasets that represent common established benchmarks (Richter et al., 2024a). Peripheral Blood Mononuclear Cells (PBMC), Pancreas, Immune Cell Atlas, Mouse Cell Atlas (MCA), Human Immune Cells (HIC), Lung, and Tabula Sapiens are seven single-modal datasets collected using scRNA-seq (Tang et al., 2009) technology. Multi-modal Peripheral Blood Mononuclear Cells (PBMC-M) and Multi-modal Bone Marrow Mononuclear Cells (BMMC) are multi-modal datasets collected using CITE-seq technology that contain RNA and protein or gene expression and protein abundance (ADT) respectively. Further details in Appendix B. + +SSL Methods: To investigate $RQ1$ , we benchmark nineteen existing SSL methods and divide them into four categories: generic, domain-specific specialized contrastive and generative methods, and baselines. SimCLR (Chen et al., 2020), MoCo (He et al., 2020), SimSiam (Chen & He, 2020), NNCLR (Dwibedi et al., 2021), BYOL (Grill et al., 2020), VICReg (Bardes et al., 2022), and BarlowTwins (Zbontar et al., 2021) are generic SSL architectures that we adopt to single-cell (multi-omics) data (see Figure G1 for architecture details). Contrastive domain-specific methods that are tailored for the single-cell data include Concerto (Yang et al., 2022b), CLEAR (Han et al., 2022), CLAIRE (Yan et al., 2023), and scCLIP (Xiong et al., 2023). 
Generative methods include commonly used for single-cell data integration, scVI (Lopez et al., 2018) and totalVI (Gayoso et al., 2021), which are single-cell specialized variational autoencoder-based methods, and single-cell foundation + +models scGPT (Cui et al., 2024), Geneformer (Theodoris et al., 2023), and scBERT (Yang et al., 2022a). Additionally, for multi-omics integration, we evaluate scButterfly (Cao et al., 2024b) and scTEL (Chen et al., 2025), which leverage variational autoencoders. SCDC (Li et al., 2024) and principal component analysis (PCA) (Pearson, 1901) are used as baselines in scSSL-Bench. We include PCA as a baseline to assess whether more complex SSL methods offer substantial improvements over a simple linear dimensionality reduction technique that does not correct for batch effects. We refer to Subsection 2.2 for detailed descriptions of each method. + +First, in all contrastive methods except Concerto, two views are created by augmenting a single sample. Second, both views are encoded by a network with shared weights, producing data representations. Concerto removes the necessity for transforming samples by placing a dropout layer behind the encoder backbone. Finally, while training, all representations produced by the encoder are passed into a projector to improve robustness (Xue et al., 2024). In all contrastive approaches but Concerto and scCLIP, the projector is discarded during inference, keeping only the encoder's output. + +Downstream Tasks and Evaluation: To address $RQ1$ , our benchmark evaluates multiple single-cell datasets on three tasks: batch correction, cell type annotation, and modality prediction (see Subsection 2.1 for details). + +Batch correction - the quality of batch-corrected embeddings is measured by biological conservation and batch correction metrics. These metrics were introduced in single-cell integration benchmarking (scIB) (Böttner et al., 2019; Luecken et al., 2022; Tran et al., 2020), a tool that is widely used in the single-cell community, see Appendix C for details. Analogous to Luecken et al. 2022, we combine bio conservation (Bio), measuring the similarity between cell embeddings and ground-truth cell types or states, and batch correction (Batch), measuring how well the batch effect is removed, by aggregating these scores into a total score by Total $= 0.6 \times \text{Bio} + 0.4 \times \text{Batch}$ . All tables showing batch correction results are min-max scaled inside each dataset. + +Cell type annotation - each dataset is divided into train (reference) and test (query) data that consists of up to three held-out (experimental) batches with unseen cells (details in Appendix C). We train a k-nearest neighbors (KNN) classifier with train (reference) embeddings and cell types to annotate test (query) data representation. Next, k-nearest neighbor probing (Marks et al., 2025) is used to predict cell types, and performance is evaluated using the macro-average F1-score and classification accuracy (Heryanto et al., 2024). + +Missing modality prediction on multi-modal datasets – we evaluate the quality of the inferred modality by measuring the Pearson correlation between the original and predicted values, see Appendix C for more details. + +# 3.2. Augmentation, Batch Normalization, and Multi-Modal Integration + +Augmentations: We evaluate augmentations for single-cell data proposed in CLEAR (Han et al., 2022) and CLAIRE (Yan et al., 2023) to investigate $RQ2$ . 
The purpose of augmentations in contrastive SSL is to transform the original sample into two distinct views that are contrasted during training (Zhang & Ma, 2022). Multiple augmentations can be applied to a data sample to improve the generalization and robustness of representations. The authors of CLEAR (Han et al., 2022) introduce four augmentations for scRNA-seq data, each of which we apply with $50\%$ probability: Masking, Gaussian noise, InnerSwap, and CrossOver. First, a random mask sets $20\%$ of a cell's genes to zero, followed by additive Gaussian noise (with mean 0 and standard deviation 0.2) to $80\%$ of genes in the cell. Then, $10\%$ of genes are swapped within the cell (InnerSwap), before mutating $25\%$ of gene expression values with another random cell (CrossOver). CLAIRE uses a neighborhood-based approach: mutual nearest neighbors (MNN) in the unintegrated space are computed for each cell across all batches. During augmentation, an inter- and an intra-batch views are computed by mutating between neighboring cells (Yan et al., 2023). We also evaluate sampling positive pairs from a batch-balanced KNN (BBKNN) graph. We investigate the impact of the MNN and BBKNN augmentations on the batch correction performance. + +Domain-Specific Batch Normalization (DSBN): Concerto (Yang et al., 2022b) adapts the idea of DSBN (Chang et al., 2019), a technique originally suggested for image data. DSBN helps to learn domain-specific information to produce domain-invariant representations by applying separate batch normalization layers for each domain. To investigate $RQ3$ , we replace the common batch normalization with DSBN where each experimental batch (different laboratory experiment in the same dataset) gets its own batch normalization layer, similar to Concerto. + +Multi-Modal Integration: For the multi-omics datasets PBMC-M and BMMC, we evaluate three integration methods as part of $RQ3$ . First, addition takes two embeddings of the same dimensionality (one per modality) and adds them together to get a joint representation, similar to the Concerto (Yang et al., 2022b). Second, concatenation appends two embeddings. Third, instead of contrasting joint views of a cell, two modalities of the same cell are contrasted using a symmetric cross-entropy loss (Wang et al., 2019) and the CLIP approach. After training with the CLIP approach (Radford et al., 2021; Xiong et al., 2023), we concatenate two embeddings during inference. + +# 4. Experiments + +As detailed in Section 3, we benchmark nineteen SSL methods on nine single-cell datasets derived from different tissues with considerable variation in data size and complexity. See Appendix B for details about the datasets. All models are trained with five unique random seeds and we report their mean performance and standard deviation. + +# 4.1. Generic versus Specialized SSL Methods + +$RQ1$ focuses on the comparison of specialized single-cell SSL frameworks and generic SSL methods. For that, we evaluate several models and two baselines on three important downstream tasks for uni- and multi-omics datasets. + +Batch Correction: The batch correction performance of all methods across five datasets is presented in Table 1. Our analysis includes two multi-modal (CITE-seq) datasets (PBMC-M and BMMC) and three single-modality (scRNA-seq) datasets (PBMC, Pancreas, and Immune Cell Atlas). + +For scRNA-seq datasets, our results show that scVI is the best-performing method that balances both batch correction and bio conservation (Table 1). 
scVI performance drops for the MCA and Lung datasets (Table H1) but excels for the Tabula Sapiens dataset (Table H11). CLAIRE ranks second-best overall but tends to overcorrect batch effects at the expense of biological variance. Compared to the other single-cell generative models, the finetuned scGPT performs well on the second-largest evaluated dataset, Immune Cell Atlas, scoring high in bio conservation and total, but on smaller datasets its batch score is significantly lower than that of the other methods. Zero-shot scGPT and finetuned Geneformer show unsatisfactory performance. Among the benchmarked generic SSL methods, VICReg, SimCLR, and MoCo perform satisfactorily for the Pancreas dataset. However, for the PBMC and Immune Cell Atlas datasets, these SSL methods prioritize batch correction over bio conservation, as indicated by their high batch and low bio scores. In comparison to the other methods, Concerto significantly underperforms in all cases and achieves a total score lower than the baselines, PCA and SCDC. As expected, PCA shows an adequate bio conservation score since it uses raw data and captures the true biological signal. + +For the multi-modal datasets, PBMC-M and BMMC, we observe that generic methods such as SimCLR, BYOL, MoCo, and VICReg are the best-performing methods, both within their category and overall (Table 1). Interestingly, MoCo overcorrects for all single- and multi-modal datasets. The results show that there is room for improvement in specialized methods: Concerto, scCLIP, and scButterfly reach low batch correction scores compared to generic methods such as MoCo or SimCLR. For PBMC-M, Concerto preserves biological variance and shows a high bio score. scTEL succeeds at + +Table 1. Batch integration performance across five datasets. We show each method's biological conservation score (Bio), batch correction score (Batch), and total score (Total), with values computed across five runs with different random seeds. We group the methods by category (generic SSL, single-cell contrastive SSL frameworks, generative methods, and baselines). For uni-modal data (PBMC, Pancreas, and Immune Cell Atlas), the specialized encoder-decoder method scVI, the domain-specific SSL method CLAIRE, and the foundation model scGPT outperform the other methods. For the multi-modal datasets PBMC-M and BMMC, generic methods achieve higher scores.
| Method | PBMC-M Bio | PBMC-M Batch | PBMC-M Total | BMMC Bio | BMMC Batch | BMMC Total | PBMC Bio | PBMC Batch | PBMC Total | Pancreas Bio | Pancreas Batch | Pancreas Total | Immune Cell Atlas Bio | Immune Cell Atlas Batch | Immune Cell Atlas Total |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| SimCLR | 0.877 ± 0.020 | 0.434 ± 0.001 | 0.700 ± 0.012 | 0.877 ± 0.025 | 0.601 ± 0.002 | 0.767 ± 0.016 | 0.370 ± 0.002 | 0.563 ± 0.005 | 0.447 ± 0.003 | 0.791 ± 0.002 | 0.615 ± 0.019 | 0.721 ± 0.009 | 0.555 ± 0.020 | 0.753 ± 0.016 | 0.635 ± 0.017 |
| MoCo | 0.786 ± 0.005 | 0.581 ± 0.016 | 0.704 ± 0.003 | 0.647 ± 0.048 | 0.819 ± 0.024 | 0.716 ± 0.038 | 0.336 ± 0.007 | 0.594 ± 0.014 | 0.439 ± 0.010 | 0.754 ± 0.008 | 0.638 ± 0.017 | 0.707 ± 0.011 | 0.404 ± 0.024 | 0.882 ± 0.022 | 0.595 ± 0.016 |
| SimSiam | 0.903 ± 0.057 | 0.455 ± 0.029 | 0.724 ± 0.046 | 0.753 ± 0.007 | 0.571 ± 0.002 | 0.680 ± 0.005 | 0.271 ± 0.016 | 0.512 ± 0.002 | 0.368 ± 0.011 | 0.531 ± 0.113 | 0.635 ± 0.017 | 0.572 ± 0.061 | 0.358 ± 0.040 | 0.640 ± 0.020 | 0.470 ± 0.030 |
| NNCLR | 0.877 ± 0.033 | 0.534 ± 0.004 | 0.740 ± 0.018 | 0.819 ± 0.021 | 0.580 ± 0.008 | 0.723 ± 0.016 | 0.345 ± 0.011 | 0.544 ± 0.009 | 0.424 ± 0.010 | 0.701 ± 0.052 | 0.579 ± 0.015 | 0.652 ± 0.037 | 0.430 ± 0.028 | 0.665 ± 0.007 | 0.524 ± 0.017 |
| BYOL | 0.928 ± 0.065 | 0.493 ± 0.016 | 0.754 ± 0.033 | 0.742 ± 0.043 | 0.693 ± 0.016 | 0.722 ± 0.019 | 0.134 ± 0.017 | 0.748 ± 0.076 | 0.379 ± 0.020 | 0.578 ± 0.029 | 0.659 ± 0.012 | 0.610 ± 0.013 | 0.222 ± 0.031 | 0.864 ± 0.009 | 0.479 ± 0.021 |
| VICReg | 0.814 ± 0.039 | 0.405 ± 0.026 | 0.651 ± 0.013 | 0.832 ± 0.051 | 0.656 ± 0.009 | 0.761 ± 0.027 | 0.412 ± 0.010 | 0.607 ± 0.000 | 0.490 ± 0.006 | 0.811 ± 0.003 | 0.617 ± 0.001 | 0.733 ± 0.002 | 0.529 ± 0.014 | 0.816 ± 0.022 | 0.644 ± 0.012 |
| Barlow Twins | 0.902 ± 0.048 | 0.430 ± 0.014 | 0.713 ± 0.034 | 0.859 ± 0.018 | 0.612 ± 0.011 | 0.760 ± 0.006 | 0.341 ± 0.010 | 0.523 ± 0.005 | 0.414 ± 0.004 | 0.694 ± 0.011 | 0.580 ± 0.010 | 0.648 ± 0.034 | 0.535 ± 0.012 | 0.734 ± 0.020 | 0.614 ± 0.01 |
| Concerto | 0.785 ± 0.002 | 0.422 ± 0.003 | 0.64 ± 0.002 | 0.524 ± 0.019 | 0.661 ± 0.008 | 0.579 ± 0.015 | 0.055 ± 0.000 | 0.566 ± 0.000 | 0.260 ± 0.000 | 0.102 ± 0.003 | 0.367 ± 0.000 | 0.208 ± 0.002 | 0.426 ± 0.014 | 0.810 ± 0.025 | 0.580 ± 0.017 |
| CLEAR | | | | | | | 0.580 ± 0.000 | 0.209 ± 0.002 | 0.432 ± 0.001 | 0.698 ± 0.011 | 0.249 ± 0.002 | 0.518 ± 0.006 | 0.775 ± 0.037 | 0.327 ± 0.008 | 0.596 ± 0.022 |
| CLAIRE | | | | | | | 0.714 ± 0.009 | 0.866 ± 0.005 | 0.774 ± 0.008 | 0.582 ± 0.003 | 0.959 ± 0.014 | 0.732 ± 0.004 | 0.548 ± 0.018 | 0.527 ± 0.011 | 0.539 ± 0.011 |
| scCLIP | 0.643 ± 0.002 | 0.402 ± 0.004 | 0.546 ± 0.001 | 0.638 ± 0.006 | 0.194 ± 0.005 | 0.460 ± 0.005 | | | | | | | | | |
| scGPT (zero-shot) | | | | | | | 0.440 ± 0.010 | 0.469 ± 0.017 | 0.451 ± 0.013 | 0.473 ± 0.001 | 0.168 ± 0.003 | 0.351 ± 0.002 | 0.380 ± 0.012 | 0.516 ± 0.014 | 0.435 ± 0.008 |
| scGPT (finetuned) | | | | | | | 0.940 ± 0.012 | 0.514 ± 0.011 | 0.770 ± 0.011 | 0.873 ± 0.003 | 0.345 ± 0.047 | 0.662 ± 0.021 | 0.979 ± 0.017 | 0.485 ± 0.019 | 0.781 ± 0.017 |
| Geneformer (finetuned) | | | | | | | 0.024 ± 0.000 | 0.462 ± 0.000 | 0.199 ± 0.000 | 0.004 ± 0.000 | 0.437 ± 0.000 | 0.177 ± 0.000 | 0.013 ± 0.000 | 0.265 ± 0.000 | 0.114 ± 0.000 |
| scButterfly | 0.702 ± 0.002 | 0.391 ± 0.003 | 0.577 ± 0.002 | 0.781 ± 0.000 | 0.297 ± 0.004 | 0.587 ± 0.002 | | | | | | | | | |
| scTEL | 0.089 ± 0.002 | 0.800 ± 0.004 | 0.373 ± 0.001 | 0.000 ± 0.006 | 0.706 ± 0.005 | 0.282 ± 0.005 | | | | | | | | | |
| totalVI / scVI | 0.702 ± 0.002 | 0.305 ± 0.002 | 0.543 ± 0.001 | 0.755 ± 0.002 | 0.272 ± 0.002 | 0.562 ± 0.002 | 0.918 ± 0.015 | 0.871 ± 0.009 | 0.899 ± 0.002 | 0.805 ± 0.002 | 0.511 ± 0.007 | 0.688 ± 0.001 | 0.862 ± 0.033 | 0.593 ± 0.013 | 0.754 ± 0.024 |
| SCDC | | | | | | | 0.679 ± 0.050 | 0.605 ± 0.005 | 0.649 ± 0.028 | 0.648 ± 0.018 | 0.350 ± 0.023 | 0.529 ± 0.002 | 0.698 ± 0.024 | 0.565 ± 0.019 | 0.645 ± 0.020 |
| PCA | 0.448 ± 0.000 | 0.369 ± 0.001 | 0.417 ± 0.000 | 0.538 ± 0.002 | 0.282 ± 0.007 | 0.436 ± 0.002 | 0.558 ± 0.003 | 0.303 ± 0.000 | 0.456 ± 0.002 | 0.683 ± 0.001 | 0.266 ± 0.000 | 0.516 ± 0.001 | 0.677 ± 0.000 | 0.276 ± 0.000 | 0.517 ± 0.000 |
scTEL succeeds at batch correction while failing in bio conservation, scoring almost zero on the bio metric for both multi-modal datasets. Despite the success of scVI for uni-modal data, totalVI's results are unsatisfactory compared to the generic contrastive methods. For the BMMC dataset, totalVI's total score (0.562) is substantially lower than that of the best method, SimCLR (0.767).

Cell Type Annotation: We assess the cell-typing performance on the single-modal scRNA-seq datasets with CLEAR augmentations, for the hold-out batch of the Immune Cell Atlas dataset (Figure 2) and all study data (Table H3). For all experiments, we do not use a projector during inference. Table H3 evaluates cell-typing for the Pancreas dataset, where unique batches were used as hold-out data. The best-performing methods are VICReg, CLEAR, and, in rare cases, finetuned single-cell foundation models (scFMs). All generic SSL methods perform well, together with scVI, which takes additional information as input. Although finetuned scFMs achieve adequate accuracy in most cases, they perform the best only for two datasets, the Immune Cell Atlas and the Xin study from the Pancreas, which are larger than the other three evaluated datasets. CLAIRE shows competitive results for the Pancreas dataset but falters on the second-biggest benchmarked dataset, the Immune Cell Atlas. Additionally, CLAIRE and scFMs have a significantly higher computational load than the other methods. PCA and zero-shot scGPT perform unsatisfactorily.

![](images/0ff925b21e5c29c334fca76800322fc8dd9c4df304555c38667548c6b27efb19.jpg)

![](images/e94d249090c9c766b45e781cf1ec2671d809398befd34bfe60ca4e62956f0e97.jpg)
Figure 2. Uni-modal cell-typing with one sequencing technology (10X $5^{\prime}$ v2) of the Immune Cell Atlas as a hold-out set. We train the encoder and classifier. The finetuned scGPT and Geneformer perform the best, while the generic VICReg method is a close third. The methods are grouped by category (baselines, specialized generative, specialized contrastive, and generic).

Table H4 shows cell-typing performance for multi-modal embeddings, where we integrate modalities through concatenation and train with CLEAR augmentations. We evaluate models using either both modalities (e.g., RNA and protein) or just the main modality (e.g., RNA) to assess whether representations capture information about the second modality and whether a single main modality is sufficient during inference when the second modality is unavailable. scButterfly slightly outperforms VICReg and SimCLR, which show competitive results, and leads overall performance (Table H4). All generic contrastive methods achieve good accuracy and outperform specialized contrastive methods like Concerto and scCLIP. While totalVI struggles with batch correction, it performs well in cell-typing. All models except scCLIP show better performance with multi-omics data than with a single modality, though the performance drop for single-modality inference is minimal. Notably, scCLIP appears to treat the second modality as noise. We conclude that scButterfly and generic contrastive models can be used for single-modality inference if the second modality is missing.

Of note, Concerto and totalVI do not support uni-modal inference for multi-modal data.

Missing Modality Prediction: Figure 3 shows the ability to predict missing protein values when given only RNA or gene expression (GEX) during inference. The model is trained on multi-omics data using CLEAR augmentations and concatenation to combine modalities. The standard deviation is close to zero (Table H5). VICReg and SimCLR outperform other methods, including specialized single-cell frameworks. We assume that Concerto, scCLIP, and scTEL do not learn enough information about the secondary modality (protein) and its connection to the main modality (RNA) and, therefore, are not able to predict the missing modality. We evaluate scButterfly in two modes: averaging over kNN neighbors, as done for the other methods, and generating proteins directly from gene expression data. While the performance differences between the two approaches are insignificant, scButterfly's Pearson correlation is unsatisfactorily low compared to generic contrastive methods. High Pearson correlations show that models effectively infer protein values from gene expression data.
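For concreteness, a minimal sketch of the kNN-based readouts used on top of the frozen encoder is shown below; the embedding, protein, and label arrays are illustrative placeholders, and the exact data handling in our pipeline may differ.

```python
import numpy as np
from scipy.stats import pearsonr
from sklearn.neighbors import KNeighborsClassifier, NearestNeighbors

def predict_missing_protein(ref_emb, ref_protein, query_emb, k=5):
    """Impute proteins for query cells by averaging the protein vectors of the
    k nearest reference cells in the (RNA-derived) embedding space."""
    nn = NearestNeighbors(n_neighbors=k).fit(ref_emb)
    _, idx = nn.kneighbors(query_emb)        # (n_query, k) neighbor indices
    return ref_protein[idx].mean(axis=1)     # (n_query, n_proteins)

def annotate_cell_types(ref_emb, ref_labels, query_emb, k=11):
    """Non-parametric cell-type annotation with a kNN classifier on embeddings."""
    clf = KNeighborsClassifier(n_neighbors=k).fit(ref_emb, ref_labels)
    return clf.predict(query_emb)

def mean_pearson(pred_protein, true_protein):
    """Average per-cell Pearson correlation between imputed and measured proteins."""
    return float(np.mean([pearsonr(p, t)[0] for p, t in zip(pred_protein, true_protein)]))
```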
Summary and Findings: Overall, specialized SSL methods designed for single-cell analysis have not demonstrated clear advantages over general-purpose approaches, except for scVI, finetuned scGPT, and CLAIRE. We attribute the superior performance of scVI, finetuned scGPT, and CLAIRE to the fact that these methods leverage experimental batch information during training. For multi-omics data, the best models are SimCLR and VICReg. Our findings indicate that current single-cell SSL methods such as scCLIP or totalVI need improvement for multi-modal downstream tasks, as they do not yet surpass or compete with generic architectures in performance.

![](images/7b2c45d7f63c77cc0d4c7bf7c55ea9a2752d04d3209c1b70be1ab7f14ba5f2f9.jpg)
Figure 3. Missing modality prediction for models trained on the multi-modal datasets, PBMC-M and BMMC. We show the average Pearson correlation between the original and inferred missing modality: protein for PBMC-M and ADT (protein abundance) for BMMC. The methods are sorted from worst (left) to best (right) within each group (specialized contrastive, generative, and generic).

# 4.2. Ablation Study

$RQ2$ investigates how hyperparameters and augmentations impact the performance of single-cell SSL. We conduct hyperparameter tuning for all generic methods using two datasets: HIC and MCA. For the specialized single-cell frameworks, we use the hyperparameters proposed in the respective original papers. For the generic methods, we focus on the augmentations, the representation dimensionality, the projection dimensionality, and the temperature parameter.

Representation Dimensionality: We perform a grid search over the representation dimensionality for the HIC and MCA datasets, evaluating the batch correction performance (details in Appendix D). We train all models with embedding dimensions \{8, 16, 32, 64, 128, 256, 512, 1024\}. Models are ranked according to the SCIB (Luecken et al., 2022) total score, which is min-max scaled across all models. Our findings indicate that the lower dimensionalities of 64 and 128 consistently lead to the best performance across all considered methods, while the larger dimensionality of 1024 achieves similar performance but requires more training time and memory (Figure G4 in the Appendix). Given these observations, we adopt an embedding size of 64 for subsequent experiments.

Projector Dimensionality: To learn more robust representations, self-supervised models may benefit from projection heads (Xue et al., 2024). We investigate the impact of the projection dimensionality during training by introducing a scale factor; at inference time, the projection head is discarded, and only the encoder is used. For contrastive methods, the projection size is scaled down by this factor, while for non-contrastive methods, it is scaled up by the same factor (see Appendix D and the sketch below). Our results reveal that the effect of the projector is ambiguous for most models (Figure G5). However, Barlow Twins, BYOL, and VICReg show improved performance with larger scaling factors.
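As a simplified illustration of how the scale factor determines the projector width (the function and the example values are illustrative):

```python
def projector_dim(repr_dim: int, scale: int, contrastive: bool) -> int:
    """Projector output size: the representation dimension is divided by the scale
    factor for contrastive methods (SimCLR, MoCo, SimSiam, NNCLR) and multiplied
    for non-contrastive methods (Barlow Twins, BYOL, VICReg); see Appendix D."""
    return repr_dim // scale if contrastive else repr_dim * scale

# With repr_dim = 64 and scale = 4: 16 dimensions (contrastive), 256 (non-contrastive).
```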
Temperature Impact: Figure 4 shows the effect of the temperature $t^{\circ}$ for the SimCLR, MoCo, and NNCLR models; a minimal sketch of how $t^{\circ}$ enters the contrastive loss is given at the end of this subsection. We evaluate $t^{\circ} \in \{0.1, 0.5, 1, 5, 10\}$ using SCIB-METRICS scores. Overall, lower $t^{\circ}$ values lead to better scores. The PyTorch default value ( $t^{\circ} = 0.5$ ) performs well across all models and datasets and is used for subsequent experiments. For the scRNA-seq datasets MCA and HIC, the batch correction metric decreases with increasing $t^{\circ}$ , except for the NNCLR model on the HIC dataset. For HIC data, higher $t^{\circ}$ leads to uncorrected batch effects, while for MCA data, it results in clustering of similar cell types that are expected to be near but not fully mixed together (e.g., CD4 T and CD8 T cells). For the multi-modal datasets PBMC-M and BMMC, the effects of $t^{\circ}$ are inconsistent. For PBMC-M, MoCo produces extremely similar results for all evaluated $t^{\circ}$ , while SimCLR and NNCLR mix batches better with higher $t^{\circ}$ ; e.g., with a $t^{\circ}$ of 10, cell types are separated while batches are mixed. For BMMC, lower $t^{\circ}$ achieves better batch correction.

![](images/6f9515025029710f716c8d231b283efa66a215a92da5253935f75cb91e430078.jpg)
Figure 4. Temperature impact on the loss of three contrastive methods on four datasets (columns). Bio conservation, batch correction, and total scores are represented on the y-axis. The results are not min-max scaled for easier comparison. Overall, a smaller temperature leads to better data integration.

![](images/4ee037866ed235a6385bf550a96b01b6156adcbdbbd63650a47415d53cef14f3.jpg)
Figure 5. Evaluation of individual and combined data augmentations for the VICReg method based on the total score for batch correction. Diagonal entries correspond to a single augmentation, and off-diagonal entries correspond to two sequentially applied augmentations. Hyperparameters are based on the ablation results (Table H10); evaluations for SimCLR and MoCo are in Figure G7.

Augmentation Ablation: The space of augmentations in the single-cell domain can be split into random transformations (Han et al., 2022; Richter et al., 2024b) and neighborhood-based transformations (Yan et al., 2023; Liu et al., 2024). We perform an ablation for all studied augmentations and optimize the hyperparameters for each (see Appendix E). To study how augmentations affect each other, we train VICReg, SimCLR, and MoCo models with combinations of two augmentations. We choose these models due to their consistently good performance. Random masking is the best-performing augmentation, both alone and combined with others (Figure 5). Additionally, CrossOver performs competitively, especially for the SimCLR model (Figure G7).

Summary and Findings: Higher-dimensional embeddings and lower temperatures enhance consistency and performance. Larger representations yield better bio conservation, whereas smaller ones yield better batch correction; the optimal embedding size is 64 or 128. Among the various data augmentation techniques, masking proves most effective and surpasses even sophisticated biology-specific approaches that incorporate batch information, such as MNN and BBKNN.
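To make the role of the temperature explicit, a minimal NT-Xent sketch is given below; tensors are illustrative, and library implementations such as LightlySSL differ in details (e.g., memory banks and distributed gathering).

```python
import torch
import torch.nn.functional as F

def nt_xent(z1: torch.Tensor, z2: torch.Tensor, temperature: float = 0.5) -> torch.Tensor:
    """Minimal NT-Xent loss for two batches of projected views z1, z2 of shape (n, d).
    Lower temperatures sharpen the softmax over cosine similarities, so hard negatives
    are penalized more strongly."""
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)   # (2n, d)
    sim = z @ z.t() / temperature                        # scaled cosine similarities
    n = z1.size(0)
    sim.fill_diagonal_(float("-inf"))                    # exclude self-similarity
    # View i is positive with view i + n (and vice versa).
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(z.device)
    return F.cross_entropy(sim, targets)
```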
# 4.3. Impact of Batch Normalization and Multi-Modal Integration Proposed for Image Data

In the following, we address $RQ3$ and study the impact of common SSL techniques, such as retaining the projector and domain-specific batch normalization (Chang et al., 2019).

Retaining Projector: To evaluate the impact of the projector during inference, we train a model consisting of an encoder and a projector with CLEAR augmentations and evaluate it with and without the projection. Although it is common to use only the encoder during inference to create an embedding (Chen et al., 2020; Chen & He, 2020), the projector is also leveraged in the single-cell community, and we therefore compare the two approaches. First, we analyze how retaining the projection layer affects batch integration for single-modal data (Table H6). While using the projection layer slightly improves batch correction, it generally reduces the biological conservation and total scores (the latter weights bio conservation more). For example, the total score decreases from 0.625 to 0.608 for SimCLR on the MCA dataset. These findings suggest using only the encoder during inference rather than the combined encoder and projector: the former better preserves biological signals despite slightly worse batch correction performance. Second, we observe a similar trend on the multi-modal datasets as on the scRNA-seq data (Table H7). The effects are less consistent and conclusive (scores change among models and datasets), and only MoCo's batch correction benefits from projection. Remarkably, Concerto uses the encoder and projector during inference but has comparably unsatisfactory batch correction performance.

Domain-Specific Batch Normalization (DSBN): Inspired by Concerto (Yang et al., 2022b) and common practices from computer vision applications, we evaluate whether models benefit from DSBN (Chang et al., 2019). Although the Concerto (Yang et al., 2022b) manuscript discusses the usage of DSBN, the publicly available code does not apply it. Therefore, we evaluate DSBN only for the generic methods. Table H8 shows reduced total performance when leveraging DSBN compared to standard batch normalization. For the HIC dataset, DSBN leads to slightly better batch correction but worse bio conservation; however, this is not the case for the MCA dataset.

Multi-Modal Integration Methods: In Table H9, we compare three ways to combine multiple modalities of a cell: element-wise addition of uni-modal embeddings (Yang et al., 2022b), concatenation of uni-modal embeddings, and multi-modal contrastive learning with the CLIP objective (Radford et al., 2021; Xiong et al., 2023). For each modality, we train a model with CLEAR (Han et al., 2022) augmentations and discard the projector during inference. See Appendix F for details. Table H9 shows that concatenation is the best integration method. Addition and concatenation show high results in bio conservation, while the CLIP-based approach performs better in batch correction.
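The three integration strategies can be summarized in the following sketch; the per-modality embeddings are illustrative placeholders, and the CLIP variant only outlines the training objective.

```python
import torch
import torch.nn.functional as F

z_rna, z_prot = torch.randn(256, 64), torch.randn(256, 64)   # per-modality embeddings

# (1) Element-wise addition of uni-modal embeddings.
z_add = z_rna + z_prot

# (2) Concatenation of uni-modal embeddings (the best-performing variant in Table H9).
z_cat = torch.cat([z_rna, z_prot], dim=1)

# (3) CLIP-style objective: the two modalities of the same cell are positives,
#     all other cells in the batch are negatives (symmetric cross-entropy).
def clip_loss(z_a: torch.Tensor, z_b: torch.Tensor, temperature: float = 0.5) -> torch.Tensor:
    logits = F.normalize(z_a, dim=1) @ F.normalize(z_b, dim=1).t() / temperature
    targets = torch.arange(z_a.size(0), device=z_a.device)
    return 0.5 * (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets))
```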
Summary and Findings: Previously recommended techniques, such as keeping the projector or using DSBN, fail to enhance performance. For combining multiple modalities, concatenation turns out to be the most effective approach.

# 5. Conclusions

We introduced a comprehensive benchmark, scSSL-Bench, for self-supervised learning on uni- and multi-modal single-cell data. First, we observe that specialized single-cell SSL methods perform better than generic methods for uni-modal data and underperform for multi-modal data $(RQ1)$ . The best scRNA-seq single-modal data integration methods are scVI, CLAIRE, and the finetuned scGPT, all specialized for single-cell data. The generative scVI and the single-cell foundation model scGPT significantly outperform all SSL methods, while CLAIRE shows good scores only for a subset of the datasets. For multi-omics data, the generic methods SimCLR and VICReg perform the best and even outperform all other methods in the cell type annotation and missing modality prediction tasks for single-modal data. According to our findings, there is a need to improve existing and develop new specialized multi-modal SSL methods, since the current frameworks do not outperform generic architectures, and multi-modal integration turned out to be a more difficult task than uni-modal integration $(RQ1)$ . Second, we conclude that the masking augmentation leads to the biggest improvements, both alone and in combination with other types of augmentations, and that moderately sized to large embeddings lead to better results $(RQ2)$ . Third, retaining the projection head or applying domain-specific batch normalization does not significantly improve the scores; rather, it degrades the total data integration score by lowering bio conservation while raising batch correction, indicating a primarily regularizing effect $(RQ3)$ . Finally, our benchmark offers a standardized framework for assessing new SSL methods, enabling researchers to systematically evaluate and compare their approaches against established baselines.

# Acknowledgements

The authors thank Sebastian Baunsgaard, Dmitry Kobak, and Thomas Sutter for their insightful comments and constructive feedback on the manuscript. We thank the three anonymous reviewers for their thorough reviews, which significantly improved the quality of this paper.

Computational data analysis was performed at the Leonhard Med secure trusted research environment at ETH Zurich and at the BIFOLD Hydra cluster.

FB is supported by the Swiss National Science Foundation (SNSF) (grant number 205321-207931).

# Impact Statement

This work provides a systematic benchmark of self-supervised learning (SSL) methods in single-cell genomics; we evaluate nineteen approaches across batch correction, cell typing, and missing modality prediction tasks. Our results offer practical guidelines for applying SSL to biological data, advancing computational tools for single-cell analysis. The implications span from biomedical research to personalized medicine, where improved data integration enables better diagnostic and therapeutic strategies. By standardizing evaluation protocols, this benchmark promotes reproducibility and cross-disciplinary collaboration.

# References

Bardes, A., Ponce, J., and LeCun, Y. VICReg: Variance-Invariance-Covariance Regularization for Self-Supervised Learning. In ICLR, 2022.
Baysoy, A., Bai, Z., Satija, R., and Fan, R. The technological landscape and applications of single-cell multi-omics. Nature Reviews Molecular Cell Biology, 24(10):695-713, Oct 2023. ISSN 1471-0080. doi: 10.1038/s41580-023-00615-w. URL https://doi.org/10.1038/s41580-023-00615-w.
Büttner, M., Miao, Z., Wolf, F. A., Teichmann, S. A., and Theis, F. J. A test metric for assessing single-cell RNA-seq batch correction. Nature Methods, 16(1):43-49, Jan 2019. ISSN 1548-7105. doi: 10.1038/s41592-018-0254-1. URL https://doi.org/10.1038/s41592-018-0254-1.
Cao, J., Nai, R., Yang, Q., Huang, J., and Gao, Y. An empirical study on disentanglement of negative-free contrastive learning. In NeurIPS, 2024a.
ISBN 9781713871088. +Cao, Y., Zhao, X., Tang, S., Jiang, Q., Li, S., Li, S., and Chen, S. scButterfly: a versatile single-cell cross-modality translation method via dual-aligned variational autoencoders. Nature Communications, 15(1):2973, Apr 2024b. ISSN 2041-1723. doi: 10.1038/s41467-024-47418-x. URL https://doi.org/10.1038/s41467-024-47418-x. +Chang, W.-G., You, T., Seo, S., Kwak, S., and Han, B. Domain-specific batch normalization for unsupervised domain adaptation. In Proceedings of the IEEE/CVF conference on Computer Vision and Pattern Recognition, pp. 7354-7362, 2019. + +Chen, T., Kornblith, S., Norouzi, M., and Hinton, G. A simple framework for contrastive learning of visual representations. In ICML, pp. 1597-1607, 2020. +Chen, X. and He, K. Exploring Simple Siamese Representation Learning, 2020. URL https://arxiv.org/abs/2011.10566. +Chen, Y., Fan, X., Shi, C., Shi, Z., and Wang, C. A joint analysis of single cell transcriptomics and proteomics using transformer. npj Systems Biology and Applications, 11(1):1, Jan 2025. ISSN 2056-7189. doi: 10.1038/s41540-024-00484-9. URL https://doi.org/10.1038/s41540-024-00484-9. +Conde, C. D., Xu, C., Jarvis, L. B., Rainbow, D. B., Wells, S. B., et al. Cross-tissue immune cell analysis reveals tissue-specific features in humans. Science, 376(6594): eabl5197, 2022. doi: 10.1126/science.abl5197. URL https://www.science.org/doi/abs/10.1126/science.abl5197. +Cui, H., Wang, C., Maan, H., Pang, K., Luo, F., Duan, N., and Wang, B. scGPT: toward building a foundation model for single-cell multi-omics using generative AI. Nature Methods, 21(8):1470–1480, 2024. +Devlin, J., Chang, M.-W., Lee, K., and Toutanova, K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. In Burstein, J., Doran, C., and Solorio, T. (eds.), Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171-4186, Minneapolis, Minnesota, June 2019a. Association for Computational Linguistics. doi: 10.18653/v1/N19-1423. URL https://aclanthology.org/N19-1423/. +Devlin, J., Chang, M.-W., Lee, K., and Toutanova, K. Bert: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 conference of the North American chapter of the association for computational linguistics: human language technologies, volume 1 (long and short papers), pp. 4171-4186, 2019b. +Ding, J., Adiconis, X., Simmons, S. K., Kowalczyk, M. S., Hession, C. C., et al. Systematic comparison of single-cell and single-nucleus rna-sequencing methods. Nature Biotechnology, 38(6):737-746, Jun 2020. ISSN 1546-1696. doi: 10.1038/s41587-020-0465-8. URL https://doi.org/10.1038/s41587-020-0465-8. +Dwibedi, D., Aytar, Y., Thompson, J., Sermanet, P., and Zisserman, A. With a Little Help from My Friends: Nearest-Neighbor Contrastive Learning of Visual Representations, + +2021. URL https://arxiv.org/abs/2104.14548. +Eraslan, G., Drokhlyansky, E., Anand, S., Fiskin, E., Subramanian, A., Slyper, M., Wang, J., Van Wittenberghe, N., Rouhana, J. M., Waldman, J., et al. Single-nucleus cross-tissue molecular reference maps toward understanding disease gene function. Science, 376(6594):eabl4290, 2022. +Gayoso, A., Steier, Z., Lopez, R., Regier, J., Nazor, K. L., Streets, A., and Yosef, N. Joint probabilistic modeling of single-cell multi-omic data with totalVI. Nature Methods, 18(3):272-282, Mar 2021. ISSN 1548-7105. doi: 10.1038/s41592-020-01050-x. 
URL https://doi.org/10.1038/s41592-020-01050-x. +Geiping, J., Garrido, Q., Fernandez, P., Bar, A., Piriaviash, H., LeCun, Y., and Goldblum, M. A Cookbook of Self-Supervised Learning, 2023. URL https://arxiv.org/abs/2304.12210. +Grandi, F. C., Modi, H., Kampman, L., and Corces, M. R. Chromatin accessibility profiling by ATAC-seq. Nature Protocols, 17(6):1518-1552, Jun 2022. ISSN 1750-2799. doi: 10.1038/s41596-022-00692-9. URL https://doi.org/10.1038/s41596-022-00692-9. +Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al. Bootstrap your own latent-a new approach to self-supervised learning. NeurIPS, 33: 21271-21284, 2020. +Han, W., Cheng, Y., Chen, J., Zhong, H., Hu, Z., Chen, S., Zong, L., Hong, L., Chan, T.-F., King, I., Gao, X., and Li, Y. Self-supervised contrastive learning for integrative single cell RNA-seq data analysis. Briefings in Bioinformatics, 23(5):bbac377, 09 2022. ISSN 1477-4054. doi: 10.1093/bib/bbac377. +Hao, Y., Hao, S., Andersen-Nissen, E., Mauck, W. M., Zheng, S., Butler, A., Lee, M. J., et al. Integrated analysis of multimodal single-cell data. Cell, 184(13): 3573-3587.e29, 2021. ISSN 0092-8674. doi: https://doi.org/10.1016/j.cell.2021.04.048. URL https://www.sciencedirect.com/science/article/pii/S0092867421005833. +Hao, Y., Stuart, T., Kowalski, M. H., Choudhary, S., Hoffman, P., et al. Dictionary learning for integrative, multimodal and scalable single-cell analysis. Nature Biotechnology, 42(2):293-304, Feb 2024. ISSN 1546-1696. doi: 10.1038/s41587-023-01767-y. URL https://doi.org/10.1038/s41587-023-01767-y. + +He, K., Fan, H., Wu, Y., Xie, S., and Girshick, R. Momentum contrast for unsupervised visual representation learning. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 9729-9738, 2020. +Heryanto, Y. D., Zhang, Y.-z., and Imoto, S. Predicting cell types with supervised contrastive learning on cells and their types. Scientific Reports, 14(1):430, Jan 2024. ISSN 2045-2322. doi: 10.1038/s41598-023-50185-2. URL https://doi.org/10.1038/s41598-023-50185-2. +Heumos, L., Schaar, A. C., Lance, C., Litinetskaya, A., Drost, F., Zappia, L., Lucken, M. D., Strobl, D. C., Henao, J., Curion, F., et al. Best practices for single-cell analysis across modalities. Nature Reviews Genetics, 24(8):550-572, 2023. +Hu, C., Li, X., Liu, D., Wu, H., Chen, X., Wang, J., and Liu, X. Teacher-Student Architecture for Knowledge Distillation: A Survey, 2023. URL https://arxiv.org/abs/2308.04268. +Jing, L., Vincent, P., LeCun, Y., and Tian, Y. Understanding Dimensional Collapse in Contrastive Self-supervised Learning. In International Conference on Learning Representations, 2022. URL https://openreview.net/forum?id=YevsQ05DEN7. +Jones, R. C., Karkanias, J., Krasnow, M. A., Pisco, A. O., Quake, S. R., et al. The Tabula Sapiens: A multiple-organ, single-cell transcriptomic atlas of humans. Science, 376 (6594):eabl4896, 2022. doi: 10.1126/science.abl4896. URL https://www.science.org/doi/abs/10.1126/science.abl4896. +Kingma, D. P. and Ba, J. Adam: A Method for Stochastic Optimization, 2017. URL https://arxiv.org/abs/s/1412.6980. +Lähnemann, D., Köster, J., Szczurek, E., McCarthy, D. J., Hicks, S. C., Robinson, M. D., Vallejos, C. A., Campbell, K. R., Beerenwinkel, N., Mahfouz, A., et al. Eleven grand challenges in single-cell data science. Genome biology, 21(1):1-35, 2020. +Lance, C., Luecken, M. D., Burkhardt, D. 
B., Cannoodt, R., Rautenstrauch, P., Laddach, A., Ubingazhibov, A., Cao, Z.-J., Deng, K., Khan, S., Liu, Q., Russkikh, N., et al. Multimodal single cell data integration challenge: Results and lessons learned. In NeurIPS Competitions and Demonstrations Track, volume 176, pp. 162-176, 2022. URL https://proceedings.mlr.press/v176/lance22a.html. + +Li, Y., Zhang, D., Yang, M., Peng, D., Yu, J., Liu, Y., Lv, J., Chen, L., and Peng, X. scBridge embraces cell heterogeneity in single-cell RNA-seq and ATAC-seq data integration. Nature Communications, 14(1):6045, Sep 2023. ISSN 2041-1723. doi: 10.1038/s41467-023-41795-5. URL https://doi.org/10.1038/s41467-023-41795-5. +Li, Y., Lin, Y., Hu, P., Peng, D., Luo, H., and Peng, X. Single-Cell RNA-Seq Debiased Clustering via Batch Effect Disentanglement. IEEE Transactions on Neural Networks and Learning Systems, 35(8):11371-11381, 2024. doi: 10.1109/TNNLS.2023.3260003. +Liu, J., Zeng, W., Kan, S., Li, M., and Zheng, R. CAKE: a flexible self-supervised framework for enhancing cell visualization, clustering and rare cell identification. Briefings in Bioinformatics, 25(1):bbad475, 2024. +Lopez, R., Regier, J., Cole, M. B., Jordan, M. I., and Yosef, N. Deep generative modeling for single-cell transcriptomics. Nature Methods, 15(12):1053-1058, Dec 2018. ISSN 1548-7105. doi: 10.1038/s41592-018-0229-2. +Lotfollahi, M., Naghipourfar, M., Luecken, M. D., Khajavi, M., Böttner, M., Wagenstetter, M., Avsec, Z., Gayoso, A., Yosef, N., Interlandi, M., Rybakov, S., Misharin, A. V., and Theis, F. J. Mapping single-cell data to reference atlases by transfer learning. Nature Biotechnology, 40(1): 121-130, Jan 2022. ISSN 1546-1696. doi: 10.1038/s4 1587-021-01001-7. URL https://doi.org/10.1 038/s41587-021-01001-7. +Luecken, M., Burkhardt, D., Cannoodt, R., Lance, C., Agrawal, A., Aliee, H., Chen, A., Deconinck, L., Detweiler, A., Granados, A., et al. A sandbox for prediction and integration of DNA, RNA, and proteins in single cells. In Vanschoren, J. and Yeung, S. (eds.), NeurIPS Track on Datasets and Benchmarks, volume 1, 2021. URL https://datasets-benchmarks-proceedings.neurips.cc/paper_files/paper/2021/file/158f3069a435b314a80bdbc024f8e422-Paper-round2.pdf. +Luecken, M. D., Böttner, M., Chaichoompu, K., Danese, A., Interlandi, M., Müller, M. F., Strobl, D. C., Zappia, L., Dugas, M., Colomé-Tatché, M., et al. Benchmarking atlas-level data integration in single-cell genomics. Nature Methods, 19(1):41-50, 2022. +Marks, M., Knott, M., Kondapaneni, N., Cole, E., Defraeye, T., Perez-Cruz, F., and Perona, P. A closer look at benchmarking self-supervised pre-training with image classification. International Journal of Computer Vision, Apr 2025. ISSN 1573-1405. doi: 10.1007/s11263-025-02402-w. URL https://doi.org/10.1007/s11263-025-02402-w. + +Min, B., Ross, H., Sulem, E., Veyseh, A. P. B., Nguyen, T. H., Sainz, O., Agirre, E., Heintz, I., and Roth, D. Recent advances in natural language processing via large pre-trained language models: A survey. ACM Computing Surveys, 56(2):1-40, 2023. doi: 10.1145/3605943. URL https://doi.org/10.1145/3605943. +Pearson, K. LIII. On lines and planes of closest fit to systems of points in space. The London, Edinburgh, and Dublin philosophical magazine and journal of science, 2(11): 559-572, 1901. +Polański, K., Young, M. D., Miao, Z., Meyer, K. B., Teichmann, S. A., and Park, J.-E. BBKNN: Fast Batch Alignment of Single Cell Transcriptomes. Bioinformatics, 2019. doi: 10.1093/bioinformatics/btz625. +Radford, A., Kim, J. 
W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al. Learning transferable visual models from natural language supervision. In ICML, pp. 8748-8763, 2021. +Richter, T., Bahrami, M., Xia, Y., Fischer, D. S., and Theis, F. J. Delineating the effective use of self-supervised learning in single-cell genomics. Nature Machine Intelligence, Dec 2024a. ISSN 2522-5839. doi: 10.1038/s42256-024-00934-3. URL https://doi.org/10.1038/s42256-024-00934-3. +Richter, T., Bahrami, M., Xia, Y., Fischer, D. S., and Theis, F. J. Delineating the Effective Use of Self-Supervised Learning in Single-Cell Genomics. bioRxiv, 2024b. doi: 10.1101/2024.02.16.580624. URL https://www.bi orxiv.org/content/early/2024/02/18/2 024.02.16.580624. +Schiappa, M. C., Rawat, Y. S., and Shah, M. Self-supervised learning for videos: A survey. ACM Computing Surveys, 55(13s):1-37, 2023. doi: 10.1145/3577925. URL https://doi.org/10.1145/3577925. +Sikkema, L., Ramírez-Suástegui, C., Strobl, D. C., Gillett, T. E., Zappia, L., Madissoon, E., et al. An integrated cell atlas of the lung in health and disease. Nature Medicine, 29(6):1563-1577, Jun 2023. ISSN 1546-170X. doi: 10.1038/s41591-023-02327-2. URL https://doi.org/10.1038/s41591-023-02327-2. +Slyper, M., Porter, C. B., Ashenberg, O., Waldman, J., Drokhlyansky, E., Wakiro, I., Smillie, C., Smith-Rosario, G., Wu, J., Dionne, D., et al. A single-cell and single-nucleus RNA-Seq toolbox for fresh and frozen human tumors. Nature Medicine, 26(5):792-802, 2020. +Sohn, K. Improved deep metric learning with multi-class N-pair loss objective. In NeurIPS, pp. 1857-1865, 2016. ISBN 9781510838819. + +Stoeckius, M., Hafemeister, C., Stephenson, W., Houck-Loomis, B., Chattopadhyay, P. K., Swerdlow, H., Satija, R., and Smibert, P. Simultaneous epitope and transcriptome measurement in single cells. Nature Methods, 14(9):865-868, Sep 2017. ISSN 1548-7105. doi: 10.1038/nmeth.4380. URL https://doi.org/10.1038/nmeth.4380. +Susmelj, I., Heller, M., Wirth, P., Prescott, J., Ebner, M., and et al. Lightly, 2023. URL https://github.com/lightly-ai/lightly. +Swanson, E., Lord, C., Reading, J., Heubeck, A. T., Savage, A. K., Green, R., Li, X.-j., Torgerson, T. R., Bumol, T. F., Graybuck, L. T., and Skene, P. J. TEA-seq: a trimodal assay for integrated single cell measurement of transcription, epitopes, and chromatin accessibility. bioRxiv, 2020. doi: 10.1101/2020.09.04.283887. URL https://www.biorxiv.org/content/early/2020/11/16/2020.09.04.283887. +Tang, F., Barbacionu, C., Wang, Y., Nordman, E., Lee, C., Xu, N., Wang, X., Bodeau, J., Tuch, B. B., Siddiqui, A., Lao, K., and Surani, M. A. mRNA-Seq whole Transcriptome analysis of a single cell. Nature Methods, 6(5):377-382, May 2009. ISSN 1548-7105. doi: 10.1038/nmeth.1315. URL https://doi.org/10.1038/nmeth.1315. +Tang, Z., Chen, G., Chen, S., Yao, J., You, L., and Chen, C. Y.-C. Modal-nexus auto-encoder for multi-modality cellular data integration and imputation. Nature Communications, 15(1):9021, Oct 2024. ISSN 2041-1723. doi: 10.1038/s41467-024-53355-6. URL https://doi.org/10.1038/s41467-024-53355-6. +Theodoris, C. V., Xiao, L., Chopra, A., Chaffin, M. D., Al Sayed, Z. R., Hill, M. C., Mantineo, H., Brydon, E. M., Zeng, Z., Liu, X. S., et al. Transfer learning enables predictions in network biology. Nature, 618(7965):616-624, 2023. +Toma, P., Ovcharenko, O., Daunhawer, I., Vogt, J. E., Barkmann, F., and Boeva, V. Benchmarking self-supervised learning for single-cell data. 
In NeurIPS 2024 Workshop: Self-Supervised Learning-Theory and Practice, 2024. +Tran, H. T. N., Ang, K. S., Chevrier, M., Zhang, X., Lee, N. Y. S., Goh, M., and Chen, J. A benchmark of batch-effect correction methods for single-cell RNA sequencing data. Genome Biology, 21(1):12, Jan 2020. ISSN 1474-760X. doi: 10.1186/s13059-019-1850-9. URL https://doi.org/10.1186/s13059-019-1850-9. +van den Oord, A., Li, Y., and Vinyals, O. Representation Learning with Contrastive Predictive Coding, 2019. URL https://arxiv.org/abs/1807.03748. + +Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, L., and Polosukhin, I. Attention is all you need. Advances in neural information processing systems, 30, 2017. +Wang, Y., Ma, X., Chen, Z., Luo, Y., Yi, J., and Bailey, J. Symmetric cross entropy for robust learning with noisy labels. In Proceedings of the IEEE/CVF international conference on computer vision, pp. 322-330, 2019. +Wolf, F. A., Angerer, P., and Theis, F. J. SCANPY: large-scale single-cell gene expression data analysis. Genome Biology, 19(1):15, Feb 2018. ISSN 1474-760X. doi: 10.1186/s13059-017-1382-0. URL https://doi.org/10.1186/s13059-017-1382-0. +Xiong, L., Chen, T., and Kellis, M. scCLIP: Multi-modal Single-cell Contrastive Learning Integration Pre-training. In NeurIPS 2023 AI for Science Workshop, 2023. URL https://openreview.net/forum?id=KmtM5ZHxct. +Xu, C., Lopez, R., Mehlman, E., Regier, J., Jordan, M. I., and Yosef, N. Probabilistic harmonization and annotation of single-cell transcriptomics data with deep generative models. Molecular Systems Biology, 17(1):e9620, 2021. doi: https://doi.org/10.15252/msb.20209620. URL https://www.embopress.org/doi/abs/10.15252/msb.20209620. +Xue, Y., Gan, E., Ni, J., Joshi, S., and Mirzasoleiman, B. Investigating the Benefits of Projection Head for Representation Learning. In ICLR, 2024. URL https://arxiv.org/abs/2403.11391. +Yan, X., Zheng, R., Wu, F., and Li, M. CLAIRE: contrastive learning-based batch correction framework for better balance between batch mixing and preservation of cellular heterogeneity. Bioinformatics (Oxford, England), 39, 02 2023. doi: 10.1093/bioinformatics/btad099. +Yang, F., Wang, W., Wang, F., Fang, Y., Tang, D., Huang, J., Lu, H., and Yao, J. scBERT as a large-scale pretrained deep language model for cell type annotation of single-cell RNA-seq data. Nature Machine Intelligence, 4(10): 852-866, 2022a. +Yang, M., Yang, Y., Xie, C., Ni, M., Liu, J., Yang, H., Mu, F., and Wang, J. Contrastive learning enables rapid mapping to multimodal single-cell atlas of multimillion scale. Nature Machine Intelligence, 4(8):696-709, 2022b. +Yu, X., Xu, X., Zhang, J., and Li, X. Batch alignment of single-cell transcriptomics data using deep metric learning. Nature Communications, 14(1):960, 2023. doi: 10.1038/s41467-023-36635-5. + +Zbontar, J., Jing, L., Misra, I., LeCun, Y., and Deny, S. Barlow twins: Self-supervised learning via redundancy reduction. In International conference on machine learning, pp. 12310-12320. PMLR, 2021. +Zhang, J. and Ma, K. Rethinking the augmentation module in contrastive learning: Learning hierarchical augmentation invariance with expanded views. In Proceedings of the IEEE/CVF Conference on Computer Vision and + +Pattern Recognition, pp. 16650-16659, 2022. +Zhang, Z., Mathew, D., Lim, T. L., Mason, K., Martinez, C. M., Huang, S., Wherry, E. J., Susztak, K., Minn, A. J., Ma, Z., and Zhang, N. R. Recovery of biological signals lost in single-cell batch integration with cellanova. 
Nature Biotechnology, Nov 2024. ISSN 1546-1696. doi: 10.1038/s41587-024-02463-1. URL https://doi.org/10.1038/s41587-024-02463-1.

# A. Generic Self-Supervised Methods

In contrastive SSL, different augmentations/modalities of the same instance are used to create positive pairs (i.e., similar examples), while pairs of distinct instances represent negative pairs (i.e., dissimilar examples) (Chen et al., 2020). Non-contrastive methods, also called negative-free contrastive learning, leverage only positive pairs (Cao et al., 2024a).

Contrastive Methods: A common framework for contrastive learning is SimCLR (Chen et al., 2020). Originally, SimCLR applies three image-data-specific augmentations to create positive/negative pairs and maximizes agreement between different augmented views via a temperature-scaled cross-entropy (NT-Xent) loss (Sohn, 2016). MoCo (He et al., 2020) uses an additional momentum encoder for the augmented views; its key advantage is improved memory efficiency. NNCLR (Dwibedi et al., 2021) samples nearest neighbors of an instance to define positive pairs. Both MoCo and NNCLR contrast via the InfoNCE loss (van den Oord et al., 2019). SimCLR (Chen et al., 2020), MoCo (He et al., 2020), and NNCLR (Dwibedi et al., 2021) rely on positive and negative samples.

Non-Contrastive Methods: The emergence of non-contrastive methods was facilitated by an improved understanding of instabilities during model training. BYOL (Grill et al., 2020) uses an online (trainable) network and a target network to train a representation; the two networks receive differently augmented views of the same sample. The target network is updated using exponential moving averages of the weights of the previous online networks. BYOL (Grill et al., 2020) minimizes the difference between the two representations produced by the networks. Additionally, using momentum encoders in MoCo (He et al., 2020) and BYOL (Grill et al., 2020) helps against dimensionality collapse caused by, for instance, the lack of negative pairs. Dimensionality collapse appears when embeddings span a lower-dimensional subspace instead of the entire space (Jing et al., 2022). SimSiam (Chen & He, 2020), like SimCLR, uses an encoder with shared weights to process two augmented views. One encoder output (left) is fed to an additional predictor network before maximizing similarity, while the other output (right) is used directly; the encoder and predictor are updated using gradients from the left path only. SimSiam (Chen & He, 2020) and BYOL (Grill et al., 2020) use a predictor network to achieve better performance and avoid representation collapse without leveraging negative pairs. Barlow Twins (Zbontar et al., 2021) uses augmentations to train two identical networks and computes a cross-correlation matrix between the trained representations to reduce redundancy of the embeddings. VICReg (Bardes et al., 2022) is a joint embedding architecture with variance, invariance, and covariance regularization. The authors introduce regularization terms to the loss to control the variance of the embeddings and decorrelate the latent variables.

# B. Datasets

All datasets used in our benchmark are publicly available.

Human Immune Cells (HIC): This dataset comprises 33,506 cells and includes 12,303 genes from ten different donors, assembled by Luecken et al. (2022) from five studies. One study derived cells from the human bone marrow and the other four from the human peripheral blood. There are 16 cell types annotated in the dataset.
Availability: https://doi.org/10.6084/m9.figshare.12420968.v8

Mouse Cell Atlas (MCA): This dataset comprises 6,954 cells collected across two studies (Tran et al., 2020), with the first study consisting of 4,239 cells and the second batch containing 2,715 cells. Three different sequencing protocols were used. The harmonized dataset contains 51,817 genes and eleven cell types. Availability: https://ndownloader.figshare.com/files/10351110 and https://ndownloader.figshare.com/files/10760158

Peripheral Blood Mononuclear Cells (PBMC): Collected by Ding et al. (2020), this dataset contains 30,449 cells from two patients and includes 33,694 genes. Cells were sequenced with seven different protocols (10x Chromium (v2), 10x Chromium (v3), Drop-seq, inDrops, Chromium (v3), Seq-Well, CEL-Seq2). We have made use of the annotations of nine unique cell types (CD4+ T cell, Cytotoxic T cell, Natural killer cell, CD16+ monocyte, CD14+ monocyte, Megakaryocyte, B cell, Dendritic cell, Plasmacytoid dendritic cell) provided in the original study. We also removed the unassigned cells. Availability: https://singlecell.broadinstitute.org/single_cell/study/SCP424/single-cell-comparison-pbmc-data

Pancreas: This dataset was collected by Tran et al. (2020), combining five studies of the human pancreas. It comprises 14,767 cells, with 5,975 genes shared across all studies, sequenced by four scRNA-seq technologies (inDrop, CEL-Seq2, Smart-Seq2, SMARTer). The harmonized dataset contains 13 cell types (alpha, beta, ductal, acinar, delta, pancreatic stellate, pancreatic polypeptide, endothelial, macrophage, mast, epsilon, Schwann, and T cell). Availability: https://figshare.com/ndownloader/files/24539828

Lung: This dataset contains 32,426 cells across 16 batches and two technologies (Drop-seq and 10x Chromium), assembled by Luecken et al. (2022) from three labs. The harmonized dataset includes 15,148 genes. The cells are derived from transplant patients and lung biopsies and are annotated as 17 cell types. Availability: https://figshare.com/ndownloader/files/24539942

Immune Cell Atlas: This dataset contains 329,762 cells and includes 36,398 genes across twelve batches and three different sequencing technologies (10x 5' v1, 10x 5' v2, 10x 3' v3), collected by Conde et al. (2022). The cells originate from 16 different tissues. The annotations include 35 fine-grain cell types. Availability: https://datasets.cellxgene.cziscience.com/08f58b32-a01b-4300-8ebc-2b93c18f26f7.h5ad

Tabula Sapiens: This dataset was collected by Jones et al. (2022) and contains 1,136,218 cells from 24 tissues and organs, sequenced with 10x 3' v3, 10x 5' v2, Smart-seq, and Smart-seq3 protocols. Tabula Sapiens is a molecular reference atlas for more than 400 cell types of the human body. Availability: https://cellxgene.cziscience.com/collections/e5f58829-1a66-40b5-a624-9046778e74f5

Multi-modal Peripheral Blood Mononuclear Cells (PBMC-M): This dataset was collected by Hao et al. (2021) with 161,764 cells across eight batches. For each cell, two modalities are available: RNA and protein. RNA has 18,702 genes, while the dimension of the protein modality is 224. As a pre-processing step, we merge different T cell granularities, similar to the Concerto framework (Yang et al., 2022b). Availability: https://atlas.fredhutch.org/data/nygc/multimodal/pbmcMULTIMODAL.h5seurat

Multi-modal Bone Marrow Mononuclear Cells (BMMC):
This dataset was collected by Luecken et al. (2021) and contains 90,261 cells across thirteen batches and twelve healthy human donors (Lance et al., 2022). Each cell has two modalities: gene expression (GEX) and protein abundance (ADT). While GEX has 13,953 genes, the protein abundance dimension is 134. Pre-processing is the same as for PBMC-M. Availability: https://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=GSE194122

# C. Evaluation Details

Preprocessing: All datasets are preprocessed using the SCANPY (Wolf et al., 2018) normalize-total function, which scales the total counts per cell to 10,000, followed by log-transformation. We subsequently perform batch-aware feature selection to choose the 4,000 most highly-variable genes (HVGs) for further processing. For the multi-modal PBMC-M and BMMC datasets, we select 2,000 HVGs instead of the 4,000 HVGs used for the single-modality datasets.

Batch Correction: The evaluated metrics are divided into two categories: those that measure the conservation of biological variance and those that measure batch correction (Tran et al., 2020; Luecken et al., 2022). To evaluate the conservation of biological variation, we calculate the isolated labels score, the Leiden NMI and ARI, the silhouette label score, and the cLISI metric. To evaluate batch correction, we calculate the graph connectivity, kBET per label, iLISI for each cell, the PCR comparison score, and the silhouette coefficient per batch. For details and definitions of the used evaluation metrics, as well as their implementation, we refer to (Luecken et al., 2022).

Cell-Type Annotation and Missing Modality Prediction: In the PBMC-M dataset, for cell-type annotation mapping and missing modality inference, we hold out batches $P3$ , $P5$ , and $P8$ . In the BMMC dataset, for cell-type annotation mapping and missing modality inference, we hold out batches $s4d1$ , $s4d8$ , and $s4d9$ . Similar to the approach of (Xu et al., 2021), we perform cell-type annotation by fitting a non-parametric supervised classifier (a k-nearest neighbors (kNN) classifier with $k = 11$ ). For missing modality prediction, we fit a kNN classifier with $k = 5$ , as in (Yang et al., 2022b).

# D. Hyperparameter Tuning

In all experiments, we use the augmentation pipeline proposed by CLEAR (Han et al., 2022) as a foundation, unless stated differently. The experiments described in this section were computed for all methods except Concerto; for the latter, we use the original model from (Yang et al., 2022b).

Optimization: All models in this benchmark, except Concerto, were trained with the Adam optimizer (Kingma & Ba, 2017). We use a stepwise learning rate schedule with a base learning rate of 1e-4 and fix the batch size at 256. When applicable, the memory bank size was set to 2048.

Encoder Architecture: We fix the encoder across all architectures and only perform a hyperparameter search on the dimensionality of the encoder output, i.e., the representation dimensionality. The encoder consists of a fully connected layer reducing the dimensionality to 128, followed by a ReLU activation and batch normalization. A further fully connected layer encodes the hidden representation to the representation dimension, followed by batch normalization (a sketch is given below).

Projector Dimensionality: Projection heads benefit self-supervised models in learning robust representations (Xue et al., 2024). At inference, the projection head is discarded, and only the (backbone) encoder is used. All evaluated architectures include a projection head. We perform a hyperparameter search to find the best output dimension of the projector.
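A minimal PyTorch sketch of the encoder described above follows; the layer sizes match the description, while the function and variable names are illustrative.

```python
import torch.nn as nn

def make_encoder(n_genes: int, repr_dim: int = 64) -> nn.Sequential:
    """Encoder used across all methods: FC -> 128 -> ReLU -> BatchNorm,
    followed by FC -> repr_dim -> BatchNorm (see the description in Appendix D)."""
    return nn.Sequential(
        nn.Linear(n_genes, 128),
        nn.ReLU(),
        nn.BatchNorm1d(128),
        nn.Linear(128, repr_dim),
        nn.BatchNorm1d(repr_dim),
    )
```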
All projection heads are implemented as described in the respective original works: SimCLR, MoCo, SimSiam, and NNCLR are evaluated with projectors that retain or scale down the dimensionality of the representation, while Barlow Twins, BYOL, and VICReg are evaluated with projectors that retain or scale it up. We follow this rationale and search a grid of scaling factors $\{1,2,4\}$ . To compute the projection dimensionality, the representation's dimension is either divided by the scaling factor (scale-down models) or multiplied by it (scale-up models).

Regularization Hyperparameters: The variance-invariance-covariance regularization hyperparameters are used as in the original work. We evaluate a grid of parameters for the invariance and variance terms, $\lambda, \alpha \in \{5, 10, 25, 50\}$ , while the covariance term $\beta$ is fixed to 1. We find that $\lambda$ and $\alpha$ fixed to 5 perform well across both ablation datasets.

Augmentation Strength: Augmentations are known to benefit SSL models in finding robust representations. Details of the evaluated augmentations are listed in Appendix E. We perform a grid search to optimize the hyperparameters for all augmentations. This includes $\alpha$ for all augmentations, $\sigma$ for the Gaussian Noise augmentation, and the KNN size for the nearest-neighbor-based transforms MNN and BBKNN. For each augmentation, the original CLEAR hyperparameters are fixed, and only the hyperparameters of the evaluated augmentation are adapted. For the ablation of BBKNN, we remove CrossOver and replace it with BBKNN. Due to the implementation of MNN, we remove CrossOver and insert MNN at the front of the augmentation pipeline. Results of the ablation are recorded in Figure G6.

# E. Augmentations

We evaluate six augmentations in this work. For all of them, the parameter $\alpha$ defines the proportion of values affected by the transform, and augmentations are applied sequentially. Masking sets the selected gene expressions to zero. Gaussian noise adds a noise vector drawn from a normal distribution (with zero mean and standard deviation $\sigma$ ) to the input. InnerSwap switches expressions between genes within a cell, while CrossOver switches expressions of the same gene between two random cells.

The MNN augmentation refers to our implementation of CLAIRE's augmentation (Yan et al., 2023). For each cell, it computes an intra- and inter-batch neighborhood based on its mutual nearest neighbors. Views are then computed by interpolating between neighbors. We do not filter cell neighborhoods based on representation similarities during early stages of model training, as is done in the original work. This work introduces the BBKNN augmentation. It uses a non-trimmed batch-balanced KNN graph (Polanski et al., 2019) to define a set of #batches $\times$ KNN neighbors for each cell. Views are computed by interpolating between neighbors. It differs from CLAIRE's concept in that it does not distinguish between intra- and inter-batch neighbors: while MNN always produces one view based on neighbors within the batch and one view based on neighbors from outside the batch, this is not the case for BBKNN. Due to its implementation, the MNN augmentation must be applied first in any augmentation pipeline. We refer to (Yan et al., 2023) for further detail on the interpolation process.
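Minimal NumPy sketches of the four random transformations are given below; the expression matrix `x` of shape cells × genes, the default hyperparameters, and the cell-pairing logic are illustrative and may differ from the benchmark's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def mask(x: np.ndarray, alpha: float = 0.2) -> np.ndarray:
    """Set a random fraction alpha of gene expression values to zero."""
    out = x.copy()
    out[rng.random(x.shape) < alpha] = 0.0
    return out

def gaussian_noise(x: np.ndarray, alpha: float = 0.2, sigma: float = 0.5) -> np.ndarray:
    """Add zero-mean Gaussian noise with std sigma to a fraction alpha of the entries."""
    noise = rng.normal(0.0, sigma, size=x.shape)
    affected = rng.random(x.shape) < alpha
    return x + noise * affected

def inner_swap(x: np.ndarray, alpha: float = 0.1) -> np.ndarray:
    """Swap expression values between randomly paired genes within each cell."""
    out = x.copy()
    n_cells, n_genes = x.shape
    n_pairs = int(alpha * n_genes / 2)
    for c in range(n_cells):
        genes = rng.choice(n_genes, size=2 * n_pairs, replace=False)
        i, j = genes[:n_pairs], genes[n_pairs:]
        out[c, i], out[c, j] = out[c, j], out[c, i]
    return out

def cross_over(x: np.ndarray, alpha: float = 0.1) -> np.ndarray:
    """Swap the values of a random fraction alpha of genes between two random cells."""
    out = x.copy()
    a, b = rng.choice(x.shape[0], size=2, replace=False)
    genes = rng.random(x.shape[1]) < alpha
    out[a, genes], out[b, genes] = x[b, genes], x[a, genes]
    return out
```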
# F. Multi-Modal Setting

Recent developments in single-cell analysis allow the measurement of multiple aspects of a cellular state. Data containing multiple modalities of a cell, e.g., RNA and protein, is called multi-omics. Existing self-supervised methods for single-cell data integration can be extended to the multi-modal setting by combining views produced by specialized models for the different modalities. We train two models, one for each modality; each model consists of an encoder and a projector. As is common (He et al., 2020; Chen et al., 2020; Geiping et al., 2023), only the encoder is used to infer the integrated representation. However, in the single-cell community, the projector is also used during inference, and we therefore also evaluate whether projection during prediction improves performance. Additionally, there are various techniques to combine representations (Radford et al., 2021; Xiong et al., 2023; Yang et al., 2022b). We evaluate three approaches: addition, concatenation, and CLIP.

Encoder & Projector Embedding Evaluation: Using CLEAR augmentations, we train two models, one for each modality, each consisting of an encoder and a projector. In Table H7, we compare data integration performance with and without a projector during inference. Interestingly, SimCLR benefits from projection, while VICReg performance degrades. We conclude that the effect of projection is inconsistent across models.

# G. Supplementary Figures

![](images/b154108f71bd192cf90aed9fee0cffc0744b3b6b1031e51797bcdb4b567d0029.jpg)

![](images/10bf2384c0adb4017a1f0d7c9ec9de97e2d024b9863675199e0c8c9475c1ebe3.jpg)

![](images/c1f6ea417b6676513a0f4830444ddd06d6cd6fff2fd9aff837adc84b74c44e74.jpg)

![](images/3eb0cb5089f54affce45203e94b4cd19a7c5734566a68edfead70118ea4bce9e.jpg)

![](images/684f421fc7fee1c01aa62304f2969d0c99f707e385326d1a595912378ef9e242.jpg)
Figure G1. Overview of considered methods. Dotted lines between the encoder and projector blocks represent weight sharing. Exponential Moving Average (EMA) denotes the updating of weights with momentum. This figure was inspired by (Bardes et al., 2022; He et al., 2020; Dwibedi et al., 2021) and is based on our implementation of the models with LightlySSL (Susmelj et al., 2023).

![](images/b8aef0ca1de81b3fdae1994343bb620a1d883a972b451fe0dfbf6a68c5198d31.jpg)

![](images/89b4b8090181ec3a2a1bb4e660ec6202d7ec9f400366b1ad0b89496ecee7db2a.jpg)

![](images/c2689554183f461ddb7c55c07ec85b92152cf6c486185531260f99c633bfbb2d.jpg)

![](images/0dedd977d6011370efc67d71af7cbe623394fc4c8e83cc5e5615363ddb3b2820.jpg)
Figure G2. Batch Correction. The uncorrected (red) panel shows that, before batch correction, cells cluster by batch (technical noise) rather than by cell type (true biological signal). After training a model and learning a corrected representation (green), cells are grouped by cell type and batches are mixed.

![](images/6bb64e2450c99d786b5b8a4f3cdb84242139b712deda575ecef32f894c3b4590.jpg)
Figure G3. Query-to-Reference. The model receives an annotated training dataset (reference, pink input) as input and learns the corresponding latent space. During inference, the representations of the training (reference) and hold-out (query) sets, the training cell types, and a classifier are used to predict the cell types of the hold-out data (query, blue result).

![](images/f69566bf601f58b5920043fd01393d426ff5917bfb47806c5292055be7b819a1.jpg)
Figure G4. Tuning of the encoder based on the representation dimensionality. The encoder architecture is defined in Appendix D. Lines correspond to the mean total score across five runs with unique seeds.

![](images/358d033b0c17e58239d383d7492b0f8c89397ca4a3a3a3fc0e2d253d094b1c.jpg)

![](images/93f23c197ebe730c77df11b0dc9d6a3565493b8ebf4fee4706e15075ba987b5f.jpg)

![](images/d27e41ddf3d33453322b17fde7fb958289537ede0b62e5a6f3158b775ff4be7e.jpg)

![](images/a20f7d7edc160bbb36b88856ab55f506cfd2d364179ae3987368968537f25d93.jpg)
Figure G5. Tuning of the projector. The scale factor is defined in Appendix D: for contrastive methods, the projection size is scaled down according to the scale factor, while for non-contrastive methods it is scaled up accordingly. Lines correspond to the mean total score across five runs with unique seeds.

![](images/ebfee1c18238702608856cffb8ae8a0fc3c1c9f833599a27f8439dd880df966a.jpg)

![](images/4d52c5627877076153b3b90c7e7a68016fe52d8b64fc3a80190aa406f4ed1fd9.jpg)
Figure G6. Ablation on the augmentation hyperparameters. The figure aggregates results for all methods trained on the HIC dataset.

![](images/0828c0b44959f4afc872637a8e6b2be730cf7a60cb8ca072637e55892f6dd77a344d.jpg)
(a) SimCLR

![](images/6747f38b8f1160257dd907437b248e3b99f4510eb9918ea3b6963efbc3c4e539.jpg)
(b) MoCo
Figure G7. Evaluation of individual and combined data augmentations based on the total score for batch correction for the SimCLR and MoCo methods. Diagonal entries correspond to a single augmentation, and off-diagonal entries correspond to two sequentially applied augmentations.

# H. Supplementary Tables

Table H1. Batch correction benchmark for methods trained using the CLEAR pipeline. This table is an extension of Table 1, containing the datasets that were used during hyperparameter tuning.
| Method | MCA Bio | MCA Batch | MCA Total | HIC Bio | HIC Batch | HIC Total | Lung Bio | Lung Batch | Lung Total |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| SimCLR | 0.519 ± 0.032 | 0.666 ± 0.026 | 0.578 ± 0.021 | 0.753 ± 0.027 | 0.536 ± 0.013 | 0.666 ± 0.018 | 0.184 ± 0.005 | 0.628 ± 0.025 | 0.362 ± 0.007 |
| MoCo | 0.280 ± 0.050 | 0.791 ± 0.020 | 0.484 ± 0.034 | 0.791 ± 0.014 | 0.603 ± 0.014 | 0.716 ± 0.008 | 0.148 ± 0.000 | 0.678 ± 0.004 | 0.360 ± 0.001 |
| SimSiam | 0.154 ± 0.043 | 0.673 ± 0.033 | 0.362 ± 0.036 | 0.585 ± 0.043 | 0.487 ± 0.055 | 0.546 ± 0.038 | 0.100 ± 0.019 | 0.654 ± 0.055 | 0.322 ± 0.011 |
| NNCLR | 0.332 ± 0.084 | 0.665 ± 0.035 | 0.465 ± 0.060 | 0.695 ± 0.019 | 0.500 ± 0.008 | 0.617 ± 0.010 | 0.147 ± 0.002 | 0.662 ± 0.019 | 0.353 ± 0.008 |
| BYOL | 0.000 ± 0.009 | 0.694 ± 0.037 | 0.277 ± 0.016 | 0.582 ± 0.021 | 0.707 ± 0.027 | 0.632 ± 0.022 | 0.028 ± 0.003 | 0.705 ± 0.003 | 0.299 ± 0.001 |
| VICReg | 0.515 ± 0.018 | 0.709 ± 0.014 | 0.592 ± 0.011 | 0.830 ± 0.024 | 0.581 ± 0.013 | 0.730 ± 0.016 | 0.208 ± 0.008 | 0.622 ± 0.003 | 0.374 ± 0.004 |
| Barlow Twins | 0.471 ± 0.032 | 0.627 ± 0.029 | 0.533 ± 0.026 | 0.784 ± 0.011 | 0.533 ± 0.004 | 0.684 ± 0.008 | 0.173 ± 0.007 | 0.651 ± 0.010 | 0.364 ± 0.000 |
| Concerto | 0.497 ± 0.036 | 0.559 ± 0.013 | 0.522 ± 0.027 | 0.000 ± 0.000 | 0.627 ± 0.004 | 0.251 ± 0.001 | 0.152 ± 0.008 | 0.711 ± 0.008 | 0.376 ± 0.008 |
| CLEAR | 0.684 ± 0.042 | 0.385 ± 0.008 | 0.565 ± 0.027 | 0.615 ± 0.004 | 0.262 ± 0.007 | 0.474 ± 0.005 | 0.876 ± 0.088 | 0.326 ± 0.049 | 0.656 ± 0.050 |
| CLAIRE | 0.467 ± 0.015 | 0.978 ± 0.007 | 0.672 ± 0.010 | 0.511 ± 0.024 | 0.974 ± 0.000 | 0.696 ± 0.014 | 0.315 ± 0.324 | 0.907 ± 0.058 | 0.552 ± 0.176 |
| scVI | 0.723 ± 0.021 | 0.254 ± 0.017 | 0.536 ± 0.019 | 0.768 ± 0.007 | 0.659 ± 0.003 | 0.725 ± 0.006 | 0.716 ± 0.176 | 0.659 ± 0.017 | 0.693 ± 0.111 |
| PCA | 0.538 ± 0.027 | 0.297 ± 0.000 | 0.442 ± 0.016 | 0.635 ± 0.014 | 0.164 ± 0.003 | 0.446 ± 0.009 | 0.748 ± 0.114 | 0.245 ± 0.062 | 0.547 ± 0.093 |
Table H2. Uni-modal cell-typing with CLEAR augmentations. We define one technology (10X 5' v2) of the Immune Cell Atlas as a hold-out set and train the encoder and kNN classifier. The generic model VICReg outperforms all other methods.
| Method | Macro F1 | Acc |
|---|---|---|
| SimCLR | 0.788 ± 0.004 | 0.830 ± 0.003 |
| MoCo | 0.794 ± 0.006 | 0.835 ± 0.014 |
| SimSiam | 0.711 ± 0.015 | 0.768 ± 0.007 |
| NNCLR | 0.740 ± 0.010 | 0.804 ± 0.007 |
| BYOL | 0.680 ± 0.002 | 0.724 ± 0.009 |
| VICReg | 0.820 ± 0.012 | 0.866 ± 0.003 |
| Barlow Twins | 0.727 ± 0.004 | 0.752 ± 0.007 |
| CLEAR | 0.806 ± 0.000 | 0.855 ± 0.000 |
| CLAIRE | 0.436 ± 0.000 | 0.492 ± 0.002 |
| scGPT (zero-shot) | 0.358 ± 0.004 | 0.439 ± 0.001 |
| scGPT (finetuned) | 0.835 ± 0.008 | 0.850 ± 0.008 |
| Geneformer (finetuned) | 0.831 ± 0.000 | 0.878 ± 0.000 |
| scBERT | 0.818 ± 0.009 | 0.873 ± 0.004 |
| scVI | 0.750 ± 0.001 | 0.804 ± 0.000 |
| SCDC | 0.595 ± 0.009 | 0.642 ± 0.000 |
| PCA | 0.071 ± 0.007 | 0.124 ± 0.000 |
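As the caption notes, these scores come from a kNN classifier fitted on the learned embeddings and evaluated on the held-out technology. A minimal sketch of that evaluation loop follows; the embeddings, label arrays, and the choice of k are placeholders rather than the benchmark's exact configuration.

```python
# Sketch of the kNN cell-typing evaluation behind Table H2: fit a kNN classifier
# on training-set embeddings, then score the hold-out set with macro F1 and
# accuracy. Embeddings and labels below are random placeholders.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import f1_score, accuracy_score

rng = np.random.default_rng(0)
train_emb, train_labels = rng.normal(size=(500, 64)), rng.integers(0, 5, 500)
test_emb, test_labels = rng.normal(size=(100, 64)), rng.integers(0, 5, 100)

knn = KNeighborsClassifier(n_neighbors=15).fit(train_emb, train_labels)
pred = knn.predict(test_emb)
print("Macro F1:", f1_score(test_labels, pred, average="macro"))
print("Acc:", accuracy_score(test_labels, pred))
```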
+ +Table H3. Cell-type annotation with CLEAR augmentations on the Pancreas dataset. We define individual studies as holdout sets during training. Accuracy and Macro F1 are computed on the holdout set. + +
| Method | Muraro et al. Macro F1 | Muraro et al. Acc | Segerstolpe et al. Macro F1 | Segerstolpe et al. Acc | Wang et al. Macro F1 | Wang et al. Acc | Xin et al. Macro F1 | Xin et al. Acc |
|---|---|---|---|---|---|---|---|---|
| SimCLR | 0.796 ± 0.014 | 0.936 ± 0.012 | 0.781 ± 0.022 | 0.946 ± 0.011 | 0.894 ± 0.005 | 0.945 ± 0.010 | 0.778 ± 0.050 | 0.819 ± 0.019 |
| MoCo | 0.825 ± 0.045 | 0.939 ± 0.013 | 0.799 ± 0.033 | 0.936 ± 0.006 | 0.844 ± 0.020 | 0.919 ± 0.021 | 0.722 ± 0.045 | 0.798 ± 0.010 |
| SimSiam | 0.679 ± 0.042 | 0.875 ± 0.026 | 0.595 ± 0.022 | 0.803 ± 0.014 | 0.586 ± 0.036 | 0.798 ± 0.022 | 0.464 ± 0.071 | 0.718 ± 0.037 |
| NNCLR | 0.707 ± 0.043 | 0.897 ± 0.024 | 0.619 ± 0.013 | 0.854 ± 0.012 | 0.729 ± 0.019 | 0.868 ± 0.012 | 0.472 ± 0.033 | 0.721 ± 0.025 |
| BYOL | 0.720 ± 0.037 | 0.884 ± 0.021 | 0.670 ± 0.005 | 0.854 ± 0.006 | 0.656 ± 0.048 | 0.813 ± 0.008 | 0.524 ± 0.062 | 0.724 ± 0.013 |
| VICReg | 0.853 ± 0.039 | 0.947 ± 0.002 | 0.855 ± 0.008 | 0.976 ± 0.006 | 0.877 ± 0.008 | 0.937 ± 0.004 | 0.830 ± 0.023 | 0.839 ± 0.010 |
| Barlow Twins | 0.700 ± 0.010 | 0.868 ± 0.005 | 0.673 ± 0.017 | 0.870 ± 0.007 | 0.737 ± 0.012 | 0.878 ± 0.010 | 0.487 ± 0.003 | 0.720 ± 0.009 |
| Concerto | 0.106 ± 0.000 | 0.431 ± 0.000 | 0.113 ± 0.000 | 0.419 ± 0.000 | 0.105 ± 0.000 | 0.435 ± 0.000 | 0.112 ± 0.000 | 0.406 ± 0.000 |
| CLEAR | 0.950 ± 0.000 | 0.961 ± 0.001 | 0.898 ± 0.000 | 0.967 ± 0.002 | 0.891 ± 0.000 | 0.941 ± 0.007 | 0.987 ± 0.000 | 0.994 ± 0.001 |
| CLAIRE | 0.941 ± 0.000 | 0.937 ± 0.002 | 0.919 ± 0.000 | 0.955 ± 0.002 | 0.893 ± 0.000 | 0.945 ± 0.001 | 0.965 ± 0.000 | 0.989 ± 0.002 |
| scGPT (zero-shot) | 0.502 ± 0.00 | 0.765 ± 0.00 | 0.549 ± 0.02 | 0.826 ± 0.01 | 0.327 ± 0.15 | 0.519 ± 0.21 | 0.581 ± 0.00 | 0.786 ± 0.00 |
| scGPT (finetuned) | 0.850 ± 0.001 | 0.917 ± 0.003 | 0.803 ± 0.02 | 0.955 ± 0.010 | 0.466 ± 0.154 | 0.560 ± 0.217 | 0.989 ± 0.006 | 0.995 ± 0.002 |
| Geneformer (finetuned) | 0.622 ± 0.000 | 0.916 ± 0.000 | 0.630 ± 0.000 | 0.944 ± 0.000 | 0.673 ± 0.000 | 0.891 ± 0.000 | 0.564 ± 0.000 | 0.993 ± 0.000 |
| scBERT | 0.642 ± 0.004 | 0.919 ± 0.012 | 0.715 ± 0.002 | 0.953 ± 0.002 | 0.710 ± 0.0830 | 0.902 ± 0.001 | 0.975 ± 0.007 | 0.990 ± 0.000 |
| scVI | 0.616 ± 0.015 | 0.889 ± 0.012 | 0.635 ± 0.127 | 0.903 ± 0.023 | 0.680 ± 0.135 | 0.897 ± 0.023 | 0.741 ± 0.171 | 0.916 ± 0.023 |
| SCDC | 0.571 ± 0.004 | 0.860 ± 0.021 | 0.581 ± 0.001 | 0.875 ± 0.015 | 0.689 ± 0.003 | 0.908 ± 0.002 | 0.442 ± 0.002 | 0.800 ± 0.009 |
| PCA | 0.071 ± 0.000 | 0.082 ± 0.00 | 0.171 ± 0.000 | 0.236 ± 0.000 | 0.195 ± 0.000 | 0.371 ± 0.000 | 0.054 ± 0.000 | 0.427 ± 0.000 |
+ +Table H4. Cell-type annotation for multi-modal datasets with CLEAR pipeline. On the left, two modalities (RNA + Protein or GEX (gene expression) + ADT (protein abundance)) were used during inference. On the right, we show inference performance with a single modality (RNA or GEX). All models were trained with two modalities. + +
| Method | RNA + Protein (PBMC-M) Macro F1 | RNA + Protein (PBMC-M) Acc | GEX + ADT (BMMC) Macro F1 | GEX + ADT (BMMC) Acc | RNA (PBMC-M) Macro F1 | RNA (PBMC-M) Acc | GEX (BMMC) Macro F1 | GEX (BMMC) Acc |
|---|---|---|---|---|---|---|---|---|
| SimCLR | 0.950 ± 0.002 | 0.977 ± 0.001 | 0.770 ± 0.028 | 0.876 ± 0.022 | 0.906 ± 0.002 | 0.946 ± 0.001 | 0.749 ± 0.050 | 0.848 ± 0.035 |
| MoCo | 0.930 ± 0.007 | 0.969 ± 0.004 | 0.609 ± 0.001 | 0.771 ± 0.041 | 0.778 ± 0.016 | 0.835 ± 0.010 | 0.630 ± 0.065 | 0.717 ± 0.073 |
| SimSiam | 0.933 ± 0.002 | 0.968 ± 0.001 | 0.666 ± 0.069 | 0.820 ± 0.036 | 0.846 ± 0.016 | 0.884 ± 0.019 | 0.670 ± 0.082 | 0.792 ± 0.056 |
| NNCLR | 0.941 ± 0.004 | 0.971 ± 0.002 | 0.734 ± 0.055 | 0.856 ± 0.043 | 0.860 ± 0.008 | 0.901 ± 0.003 | 0.703 ± 0.085 | 0.806 ± 0.070 |
| BYOL | 0.933 ± 0.008 | 0.968 ± 0.004 | 0.737 ± 0.051 | 0.847 ± 0.037 | 0.857 ± 0.047 | 0.898 ± 0.034 | 0.704 ± 0.048 | 0.795 ± 0.054 |
| VICReg | 0.950 ± 0.001 | 0.977 ± 0.000 | 0.808 ± 0.021 | 0.899 ± 0.017 | 0.923 ± 0.005 | 0.957 ± 0.004 | 0.785 ± 0.014 | 0.887 ± 0.015 |
| Barlow Twins | 0.949 ± 0.001 | 0.976 ± 0.001 | 0.766 ± 0.012 | 0.863 ± 0.003 | 0.890 ± 0.014 | 0.919 ± 0.008 | 0.733 ± 0.013 | 0.816 ± 0.017 |
| Concerto | 0.892 ± 0.000 | 0.947 ± 0.001 | 0.673 ± 0.000 | 0.825 ± 0.000 | – | – | – | – |
| scCLIP | 0.699 ± 0.009 | 0.851 ± 0.025 | 0.557 ± 0.015 | 0.797 ± 0.008 | 0.728 ± 0.019 | 0.857 ± 0.004 | 0.635 ± 0.071 | 0.818 ± 0.051 |
| scButterfly | 0.949 ± 0.000 | 0.976 ± 0.000 | 0.844 ± 0.002 | 0.924 ± 0.000 | 0.946 ± 0.000 | 0.973 ± 0.000 | 0.831 ± 0.000 | 0.920 ± 0.001 |
| scTEL | 0.173 ± 0.001 | 0.211 ± 0.031 | 0.039 ± 0.003 | 0.149 ± 0.004 | – | – | – | – |
| totalVI | 0.829 ± 0.021 | 0.977 ± 0.0158 | 0.829 ± 0.023 | 0.911 ± 0.015 | – | – | – | – |
+ +Table H5. Missing modality prediction for methods trained with the CLEAR pipeline on multi-modal datasets. We show the average Pearson correlation between the original and inferred missing modality: protein for PBMC-M, and ADT (protein abundance) for BMMC. + +
| Method | PBMC-M Pearson Mean | BMMC Pearson Mean |
|---|---|---|
| SimCLR | 0.866 ± 0.001 | 0.757 ± 0.002 |
| MoCo | 0.856 ± 0.001 | 0.721 ± 0.004 |
| SimSiam | 0.859 ± 0.002 | 0.748 ± 0.002 |
| NNCLR | 0.861 ± 0.002 | 0.751 ± 0.001 |
| BYOL | 0.860 ± 0.000 | 0.738 ± 0.002 |
| VICReg | 0.865 ± 0.001 | 0.759 ± 0.001 |
| Barlow Twins | 0.864 ± 0.001 | 0.755 ± 0.001 |
| Concerto | 0.742 ± 0.006 | 0.542 ± 0.001 |
| scCLIP | 0.614 ± 0.003 | 0.175 ± 0.005 |
| scButterfly (kNN) | 0.856 ± 0.000 | 0.651 ± 0.001 |
| scButterfly (generated) | 0.840 ± 0.000 | 0.624 ± 0.002 |
| scTEL (100 epochs) | 0.022 ± 0.005 | 0.047 ± 0.002 |
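Table H5's metric is the average Pearson correlation between the measured and the inferred missing modality. One way such a mean correlation can be computed is sketched below; the per-protein averaging axis and the placeholder arrays are assumptions rather than the exact evaluation code used in the benchmark.

```python
# Sketch of the Table H5 metric: Pearson correlation between measured and
# predicted values of the missing modality (e.g., ADT), averaged over proteins.
# Arrays are random placeholders standing in for real predictions.
import numpy as np

rng = np.random.default_rng(0)
measured = rng.normal(size=(1000, 25))                              # cells x proteins
predicted = measured + rng.normal(scale=0.5, size=measured.shape)   # toy prediction

per_protein_r = [
    np.corrcoef(measured[:, j], predicted[:, j])[0, 1]
    for j in range(measured.shape[1])
]
print("Pearson mean:", float(np.mean(per_protein_r)))
```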
+ +Table H6. Batch correction results for methods with and without projection layer during inference. All methods are trained using the CLEAR pipeline. Results are not min-max scaled for easier comparison. + +
| Method | Encoder: MCA Bio | Encoder: MCA Batch | Encoder: MCA Total | Encoder: HIC Bio | Encoder: HIC Batch | Encoder: HIC Total | Enc.+Proj.: MCA Bio | Enc.+Proj.: MCA Batch | Enc.+Proj.: MCA Total | Enc.+Proj.: HIC Bio | Enc.+Proj.: HIC Batch | Enc.+Proj.: HIC Total |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| SimCLR | 0.620 ± 0.020 | 0.633 ± 0.007 | 0.625 ± 0.014 | 0.683 ± 0.008 | 0.567 ± 0.001 | 0.637 ± 0.004 | 0.575 ± 0.003 | 0.658 ± 0.006 | 0.608 ± 0.000 | 0.674 ± 0.010 | 0.575 ± 0.009 | 0.635 ± 0.002 |
| MoCo | 0.561 ± 0.013 | 0.706 ± 0.002 | 0.619 ± 0.007 | 0.721 ± 0.000 | 0.585 ± 0.003 | 0.667 ± 0.001 | 0.536 ± 0.000 | 0.726 ± 0.015 | 0.612 ± 0.006 | 0.705 ± 0.019 | 0.602 ± 0.001 | 0.664 ± 0.012 |
| SimSiam | 0.519 ± 0.008 | 0.656 ± 0.025 | 0.574 ± 0.006 | 0.623 ± 0.040 | 0.545 ± 0.012 | 0.592 ± 0.029 | 0.435 ± 0.009 | 0.681 ± 0.027 | 0.533 ± 0.006 | 0.598 ± 0.038 | 0.540 ± 0.017 | 0.575 ± 0.030 |
| NNCLR | 0.570 ± 0.012 | 0.646 ± 0.003 | 0.600 ± 0.008 | 0.680 ± 0.023 | 0.550 ± 0.003 | 0.628 ± 0.013 | 0.493 ± 0.001 | 0.675 ± 0.013 | 0.566 ± 0.004 | 0.657 ± 0.026 | 0.557 ± 0.005 | 0.617 ± 0.018 |
| BYOL | 0.486 ± 0.017 | 0.670 ± 0.003 | 0.560 ± 0.012 | 0.637 ± 0.007 | 0.602 ± 0.000 | 0.623 ± 0.004 | 0.398 ± 0.004 | 0.667 ± 0.028 | 0.506 ± 0.013 | 0.552 ± 0.010 | 0.607 ± 0.007 | 0.574 ± 0.009 |
| VICReg | 0.624 ± 0.005 | 0.653 ± 0.009 | 0.636 ± 0.006 | 0.716 ± 0.007 | 0.584 ± 0.002 | 0.663 ± 0.003 | 0.587 ± 0.002 | 0.699 ± 0.009 | 0.632 ± 0.005 | 0.726 ± 0.005 | 0.588 ± 0.005 | 0.671 ± 0.005 |
| Barlow Twins | 0.606 ± 0.009 | 0.623 ± 0.013 | 0.613 ± 0.000 | 0.716 ± 0.006 | 0.566 ± 0.006 | 0.656 ± 0.001 | 0.544 ± 0.018 | 0.664 ± 0.004 | 0.592 ± 0.012 | 0.675 ± 0.029 | 0.562 ± 0.006 | 0.630 ± 0.020 |
| Concerto | 0.634 ± 0.002 | 0.529 ± 0.000 | 0.592 ± 0.001 | 0.357 ± 0.000 | 0.470 ± 0.001 | 0.402 ± 0.001 | – | – | – | – | – | – |
| CLEAR | 0.696 ± 0.010 | 0.438 ± 0.001 | 0.593 ± 0.006 | 0.642 ± 0.003 | 0.408 ± 0.006 | 0.549 ± 0.001 | – | – | – | – | – | – |
| CLAIRE | 0.689 ± 0.018 | 0.763 ± 0.008 | 0.718 ± 0.014 | 0.699 ± 0.029 | 0.700 ± 0.003 | 0.699 ± 0.018 | – | – | – | – | – | – |
| PCA | 0.651 ± 0.001 | 0.348 ± 0.000 | 0.530 ± 0.000 | 0.654 ± 0.008 | 0.320 ± 0.000 | 0.521 ± 0.005 | 0.651 ± 0.001 | 0.348 ± 0.000 | 0.530 ± 0.000 | 0.654 ± 0.008 | 0.302 ± 0.000 | 0.521 ± 0.005 |
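Table H6 contrasts representations read out at the encoder output with representations read out after the projection head. The sketch below illustrates these two read-out points for a generic SSL model; layer sizes and module names are illustrative assumptions, not the benchmarked architectures.

```python
# Two inference read-out points compared in Table H6: the encoder output h
# ("Encoder" columns) vs the projected output z ("Encoder + Projection" columns).
import torch
import torch.nn as nn

encoder = nn.Sequential(nn.Linear(2000, 512), nn.ReLU(), nn.Linear(512, 128))
projector = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 32))

x = torch.randn(16, 2000)   # preprocessed expression profiles (placeholder)
h = encoder(x)              # representation kept when using the encoder only
z = projector(h)            # representation kept when retaining the projection head
```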
+ +Table H7. Data integration for methods using the CLEAR pipeline on multimodal datasets. We compare the effect of retaining the projection head during inference to the representation quality when using only the encoder. This table is not min-max scaled. + +
| Method | Encoder: PBMC-M Bio | Encoder: PBMC-M Batch | Encoder: PBMC-M Total | Encoder: BMMC Bio | Encoder: BMMC Batch | Encoder: BMMC Total | Enc.+Proj.: PBMC-M Bio | Enc.+Proj.: PBMC-M Batch | Enc.+Proj.: PBMC-M Total | Enc.+Proj.: BMMC Bio | Enc.+Proj.: BMMC Batch | Enc.+Proj.: BMMC Total |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| SimCLR | 0.738 ± 0.000 | 0.518 ± 0.010 | 0.650 ± 0.004 | 0.718 ± 0.016 | 0.574 ± 0.001 | 0.660 ± 0.010 | 0.741 ± 0.021 | 0.513 ± 0.005 | 0.650 ± 0.015 | 0.712 ± 0.004 | 0.567 ± 0.001 | 0.654 ± 0.002 |
| MoCo | 0.776 ± 0.001 | 0.570 ± 0.007 | 0.694 ± 0.002 | 0.677 ± 0.021 | 0.582 ± 0.004 | 0.639 ± 0.014 | 0.762 ± 0.009 | 0.606 ± 0.004 | 0.700 ± 0.004 | 0.592 ± 0.025 | 0.637 ± 0.007 | 0.610 ± 0.012 |
| SimSiam | 0.766 ± 0.051 | 0.563 ± 0.012 | 0.685 ± 0.036 | 0.677 ± 0.011 | 0.559 ± 0.000 | 0.629 ± 0.006 | 0.767 ± 0.024 | 0.565 ± 0.006 | 0.686 ± 0.017 | 0.635 ± 0.021 | 0.553 ± 0.002 | 0.602 ± 0.014 |
| NNCLR | 0.750 ± 0.007 | 0.535 ± 0.010 | 0.664 ± 0.000 | 0.706 ± 0.002 | 0.565 ± 0.001 | 0.650 ± 0.002 | 0.743 ± 0.032 | 0.533 ± 0.024 | 0.659 ± 0.029 | 0.695 ± 0.014 | 0.575 ± 0.006 | 0.647 ± 0.011 |
| BYOL | 0.789 ± 0.015 | 0.557 ± 0.000 | 0.696 ± 0.009 | 0.701 ± 0.005 | 0.574 ± 0.007 | 0.651 ± 0.006 | 0.783 ± 0.007 | 0.543 ± 0.009 | 0.687 ± 0.008 | 0.691 ± 0.006 | 0.584 ± 0.002 | 0.648 ± 0.004 |
| VICReg | 0.749 ± 0.008 | 0.478 ± 0.017 | 0.641 ± 0.002 | 0.722 ± 0.002 | 0.582 ± 0.001 | 0.666 ± 0.000 | 0.763 ± 0.001 | 0.491 ± 0.003 | 0.654 ± 0.002 | 0.714 ± 0.000 | 0.578 ± 0.001 | 0.660 ± 0.001 |
| Barlow Twins | 0.755 ± 0.018 | 0.509 ± 0.015 | 0.657 ± 0.017 | 0.704 ± 0.001 | 0.577 ± 0.005 | 0.653 ± 0.003 | 0.712 ± 0.006 | 0.506 ± 0.003 | 0.630 ± 0.005 | 0.704 ± 0.002 | 0.581 ± 0.009 | 0.655 ± 0.002 |
| Concerto | 0.773 ± 0.117 | 0.436 ± 0.006 | 0.638 ± 0.072 | 0.604 ± 0.089 | 0.525 ± 0.01 | 0.573 ± 0.054 | – | – | – | – | – | – |
| PCA | 0.602 ± 0.000 | 0.504 ± 0.000 | 0.563 ± 0.000 | 0.558 ± 0.000 | 0.322 ± 0.000 | 0.464 ± 0.000 | 0.602 ± 0.000 | 0.504 ± 0.000 | 0.563 ± 0.000 | 0.558 ± 0.000 | 0.322 ± 0.000 | 0.464 ± 0.000 |
+ +Table H8. Batch correction benchmark for methods trained using the CLEAR pipeline with domain specific batch normalization (DSBN). Results are not min-max scaled for easier comparison. + +
| Method | Batch Norm.: HIC Bio | Batch Norm.: HIC Batch | Batch Norm.: HIC Total | Batch Norm.: MCA Bio | Batch Norm.: MCA Batch | Batch Norm.: MCA Total | DSBN: HIC Bio | DSBN: HIC Batch | DSBN: HIC Total | DSBN: MCA Bio | DSBN: MCA Batch | DSBN: MCA Total |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| SimCLR | 0.703 ± 0.022 | 0.573 ± 0.014 | 0.651 ± 0.009 | 0.627 ± 0.009 | 0.644 ± 0.026 | 0.634 ± 0.014 | 0.680 ± 0.020 | 0.583 ± 0.015 | 0.641 ± 0.008 | 0.624 ± 0.008 | 0.636 ± 0.009 | 0.629 ± 0.006 |
| MoCo | 0.707 ± 0.010 | 0.582 ± 0.020 | 0.657 ± 0.011 | 0.518 ± 0.060 | 0.731 ± 0.032 | 0.603 ± 0.047 | 0.648 ± 0.040 | 0.612 ± 0.004 | 0.633 ± 0.022 | 0.549 ± 0.013 | 0.697 ± 0.003 | 0.608 ± 0.008 |
| SimSiam | 0.619 ± 0.053 | 0.544 ± 0.043 | 0.589 ± 0.049 | 0.523 ± 0.069 | 0.668 ± 0.072 | 0.581 ± 0.069 | 0.603 ± 0.074 | 0.595 ± 0.025 | 0.600 ± 0.034 | 0.502 ± 0.018 | 0.635 ± 0.013 | 0.555 ± 0.015 |
| NNCLR | 0.658 ± 0.015 | 0.546 ± 0.011 | 0.613 ± 0.010 | 0.574 ± 0.120 | 0.637 ± 0.056 | 0.599 ± 0.087 | 0.659 ± 0.011 | 0.590 ± 0.006 | 0.632 ± 0.009 | 0.543 ± 0.019 | 0.651 ± 0.001 | 0.587 ± 0.011 |
| BYOL | 0.607 ± 0.024 | 0.624 ± 0.016 | 0.614 ± 0.020 | 0.483 ± 0.005 | 0.679 ± 0.042 | 0.561 ± 0.019 | 0.576 ± 0.050 | 0.600 ± 0.018 | 0.586 ± 0.023 | 0.473 ± 0.021 | 0.673 ± 0.014 | 0.553 ± 0.017 |
| VICReg | 0.706 ± 0.034 | 0.592 ± 0.019 | 0.661 ± 0.017 | 0.615 ± 0.016 | 0.665 ± 0.020 | 0.635 ± 0.017 | 0.674 ± 0.056 | 0.591 ± 0.012 | 0.641 ± 0.033 | 0.619 ± 0.004 | 0.649 ± 0.002 | 0.631 ± 0.003 |
| Barlow Twins | 0.713 ± 0.014 | 0.572 ± 0.008 | 0.656 ± 0.005 | 0.603 ± 0.010 | 0.636 ± 0.064 | 0.617 ± 0.027 | 0.707 ± 0.004 | 0.577 ± 0.003 | 0.655 ± 0.002 | 0.603 ± 0.020 | 0.634 ± 0.011 | 0.615 ± 0.008 |
| Concerto | 0.357 ± 0.000 | 0.470 ± 0.029 | 0.402 ± 0.012 | 0.635 ± 0.016 | 0.529 ± 0.028 | 0.593 ± 0.021 | – | – | – | – | – | – |
| PCA | 0.656 ± 0.009 | 0.320 ± 0.005 | 0.522 ± 0.005 | 0.651 ± 0.019 | 0.348 ± 0.000 | 0.530 ± 0.012 | 0.656 ± 0.000 | 0.320 ± 0.002 | 0.522 ± 0.001 | 0.651 ± 0.000 | 0.348 ± 0.001 | 0.530 ± 0.000 |
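Table H8 compares standard batch normalization with domain-specific batch normalization (DSBN), i.e., a separate BatchNorm layer maintained per sequencing batch and selected by the sample's batch label. A minimal sketch of such a layer is given below; dimensions and the class name are assumptions, not the benchmarked implementation.

```python
# Sketch of domain-specific batch normalization (DSBN): one BatchNorm1d per
# batch/domain, chosen by the domain label of the incoming mini-batch.
import torch
import torch.nn as nn

class DSBN(nn.Module):
    def __init__(self, num_features: int, num_domains: int):
        super().__init__()
        self.bns = nn.ModuleList(nn.BatchNorm1d(num_features) for _ in range(num_domains))

    def forward(self, x: torch.Tensor, domain: int) -> torch.Tensor:
        return self.bns[domain](x)

layer = DSBN(num_features=128, num_domains=4)
out = layer(torch.randn(32, 128), domain=2)
```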
+ +Table H9. Comparison of different multi-omics integration methods using the CLEAR pipeline. Data integration metrics were computed for the BMMC dataset. + +
| Method | Add: Bio | Add: Batch | Add: Total | Concat: Bio | Concat: Batch | Concat: Total | CLIP + Concat: Bio | CLIP + Concat: Batch | CLIP + Concat: Total |
|---|---|---|---|---|---|---|---|---|---|
| SimCLR | 0.827 ± 0.078 | 0.3 ± 0.057 | 0.616 ± 0.05 | 0.84 ± 0.093 | 0.273 ± 0.058 | 0.613 ± 0.065 | 0.511 ± 0.223 | 0.504 ± 0.094 | 0.508 ± 0.166 |
| MoCo | 0.935 ± 0.07 | 0.407 ± 0.019 | 0.724 ± 0.045 | 0.056 ± 0.065 | 0.8 ± 0.000 | 0.354 ± 0.039 | 0.566 ± 0.157 | 0.464 ± 0.161 | 0.525 ± 0.049 |
| SimSiam | 0.453 ± 0.175 | 0.174 ± 0.025 | 0.341 ± 0.107 | 0.506 ± 0.146 | 0.21 ± 0.041 | 0.387 ± 0.083 | 0.197 ± 0.177 | 0.364 ± 0.051 | 0.264 ± 0.011 |
| NNCLR | 0.679 ± 0.171 | 0.225 ± 0.041 | 0.498 ± 0.113 | 0.768 ± 0.105 | 0.231 ± 0.034 | 0.553 ± 0.06 | 0.584 ± 0.088 | 0.5 ± 0.079 | 0.551 ± 0.066 |
| BYOL | 0.117 ± 0.109 | 0.8 ± 0.000 | 0.39 ± 0.066 | 0.527 ± 0.029 | 0.673 ± 0.074 | 0.586 ± 0.03 | 0.46 ± 0.123 | 0.403 ± 0.152 | 0.437 ± 0.115 |
| VICReg | 0.791 ± 0.089 | 0.449 ± 0.014 | 0.654 ± 0.052 | 0.887 ± 0.035 | 0.484 ± 0.009 | 0.726 ± 0.022 | 0.72 ± 0.079 | 0.38 ± 0.055 | 0.584 ± 0.056 |
| Barlow Twins | 0.717 ± 0.055 | 0.262 ± 0.026 | 0.535 ± 0.039 | 0.852 ± 0.096 | 0.27 ± 0.012 | 0.62 ± 0.059 | 0.706 ± 0.115 | 0.352 ± 0.085 | 0.565 ± 0.055 |
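Table H9 compares three ways of combining the two modality embeddings: element-wise addition, concatenation, and a CLIP-style contrastive alignment followed by concatenation. The sketch below shows only the two simple fusions; shapes are placeholder assumptions and the CLIP-style alignment loss is omitted.

```python
# The two simplest fusion strategies from Table H9: element-wise addition vs
# concatenation of per-modality embeddings (shapes are placeholders).
import numpy as np

rng = np.random.default_rng(0)
z_gex = rng.normal(size=(256, 64))   # RNA/GEX embedding
z_adt = rng.normal(size=(256, 64))   # protein/ADT embedding

z_add = z_gex + z_adt                           # "Add": requires equal dimensions
z_concat = np.concatenate([z_gex, z_adt], 1)    # "Concat": 128-d joint embedding
print(z_add.shape, z_concat.shape)
```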
+ +Table H10. Augmentation Parameters for the CLEAR (Han et al., 2022) augmentations on the left. Results for the ablation of all augmentations on the right, including the CLAIRE (Yan et al., 2023) augmentation denoted as MNN, and our BBKNN augmentation. Results stem from our ablation detailed in Appendix C. + +
| Augmentation | CLEAR: α | CLEAR: σ | CLEAR: knn | Ablation: α | Ablation: σ | Ablation: knn |
|---|---|---|---|---|---|---|
| Masking | 0.2 | – | – | 0.5 | – | – |
| Gaussian Noise | 0.8 | 0.2 | – | 0.3 | 0.2 | – |
| InnerSwap | 0.1 | – | – | 0.3 | – | – |
| CrossOver | 0.25 | – | – | 0.1 | – | – |
| BBKNN | – | – | – | 0.9 | – | 3 |
| MNN | – | – | – | 0.5 | – | 3 |
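Table H10 lists the augmentation hyperparameters, where α is the fraction of entries an augmentation touches, σ the Gaussian noise scale, and knn the neighborhood size of the neighbor-based augmentations. The sketch below applies masking and Gaussian noise to a toy expression vector with the ablation values; the exact semantics of α and σ in the benchmarked implementations are assumptions here, not a verbatim CLEAR (Han et al., 2022) implementation.

```python
# Sketch of two expression-level augmentations parameterized as in Table H10
# (alpha = fraction of genes affected, sigma = Gaussian noise scale).
import numpy as np

rng = np.random.default_rng(0)

def random_mask(x, alpha=0.5):
    x = x.copy()
    idx = rng.random(x.shape[0]) < alpha
    x[idx] = 0.0
    return x

def gaussian_noise(x, alpha=0.3, sigma=0.2):
    x = x.copy()
    idx = rng.random(x.shape[0]) < alpha
    x[idx] += rng.normal(scale=sigma, size=idx.sum())
    return x

cell = rng.poisson(2.0, size=2000).astype(float)     # toy expression vector
view1, view2 = random_mask(cell), gaussian_noise(cell)
```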
+ +Table H11. Batch correction benchmark for the Tabula Sapiens dataset (1.1 million cells) trained using the CLEAR pipeline (only 1 run) to show the scalability of our benchmark. The previously drawn conclusion that baselines outperform SSL methods in uni-modal batch correction holds. Generic SSL methods are good at batch correction but not preservation of the true biological variance. + +
| Method | Bio | Batch | Total |
|---|---|---|---|
| SimCLR | 0.374 | 0.567 | 0.451 |
| MoCo | 0.342 | 0.541 | 0.421 |
| SimSiam | 0.237 | 0.430 | 0.314 |
| NNCLR | 0.312 | 0.454 | 0.368 |
| BYOL | 0.085 | 0.800 | 0.371 |
| VICReg | 0.327 | 0.657 | 0.459 |
| Barlow Twins | 0.376 | 0.427 | 0.396 |
| scVI | 0.723 | 0.254 | 0.536 |
| PCA | 0.538 | 0.297 | 0.442 |
\ No newline at end of file diff --git a/ICML/2025/scSSL-Bench_ Benchmarking Self-Supervised Learning for Single-Cell Data/images.zip b/ICML/2025/scSSL-Bench_ Benchmarking Self-Supervised Learning for Single-Cell Data/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..4d6e8cabddfde3989d9c1057eb7d2a4d305b109a --- /dev/null +++ b/ICML/2025/scSSL-Bench_ Benchmarking Self-Supervised Learning for Single-Cell Data/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1489357b8ec48cee38fbe97e93c4c995450d89971e1394ca5b7432ef824f8d37 +size 2396394 diff --git a/ICML/2025/scSSL-Bench_ Benchmarking Self-Supervised Learning for Single-Cell Data/layout.json b/ICML/2025/scSSL-Bench_ Benchmarking Self-Supervised Learning for Single-Cell Data/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..e21a921e606eade08bd736d474a80ec92d1afb96 --- /dev/null +++ b/ICML/2025/scSSL-Bench_ Benchmarking Self-Supervised Learning for Single-Cell Data/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:dad7595de871bc033d40b8430537a45f3f378cb1beb6989f6d81f8728d753587 +size 595190 diff --git a/ICML/2025/sciLaMA_ A Single-Cell Representation Learning Framework to Leverage Prior Knowledge from Large Language Models/48fbbc80-0e7b-4115-954d-1fc9c1c533d4_content_list.json b/ICML/2025/sciLaMA_ A Single-Cell Representation Learning Framework to Leverage Prior Knowledge from Large Language Models/48fbbc80-0e7b-4115-954d-1fc9c1c533d4_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..f396103c7b9fe86e2fad99e4400719c16454bdd2 --- /dev/null +++ b/ICML/2025/sciLaMA_ A Single-Cell Representation Learning Framework to Leverage Prior Knowledge from Large Language Models/48fbbc80-0e7b-4115-954d-1fc9c1c533d4_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:57e2024e4d7ac30a7fff9579bdbd3b7beab96f0568c727efc116157f5f0bd80b +size 125181 diff --git a/ICML/2025/sciLaMA_ A Single-Cell Representation Learning Framework to Leverage Prior Knowledge from Large Language Models/48fbbc80-0e7b-4115-954d-1fc9c1c533d4_model.json b/ICML/2025/sciLaMA_ A Single-Cell Representation Learning Framework to Leverage Prior Knowledge from Large Language Models/48fbbc80-0e7b-4115-954d-1fc9c1c533d4_model.json new file mode 100644 index 0000000000000000000000000000000000000000..9fb77eda71ca09ae1f9e08ba445eee0080da5699 --- /dev/null +++ b/ICML/2025/sciLaMA_ A Single-Cell Representation Learning Framework to Leverage Prior Knowledge from Large Language Models/48fbbc80-0e7b-4115-954d-1fc9c1c533d4_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9bd2178cad0292dfdb19d8cb1ff9c4576ee5d82bb7bcf0ae2a1569d910d869b9 +size 147072 diff --git a/ICML/2025/sciLaMA_ A Single-Cell Representation Learning Framework to Leverage Prior Knowledge from Large Language Models/48fbbc80-0e7b-4115-954d-1fc9c1c533d4_origin.pdf b/ICML/2025/sciLaMA_ A Single-Cell Representation Learning Framework to Leverage Prior Knowledge from Large Language Models/48fbbc80-0e7b-4115-954d-1fc9c1c533d4_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..c7b62845c5054efd70f62f803e5bd9d9a9fd59ce --- /dev/null +++ b/ICML/2025/sciLaMA_ A Single-Cell Representation Learning Framework to Leverage Prior Knowledge from Large Language Models/48fbbc80-0e7b-4115-954d-1fc9c1c533d4_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid 
sha256:b62009b56373b68993bc400a54a37c562aed52023d7ab98803189aa1313a3bf6 +size 8297401 diff --git a/ICML/2025/sciLaMA_ A Single-Cell Representation Learning Framework to Leverage Prior Knowledge from Large Language Models/full.md b/ICML/2025/sciLaMA_ A Single-Cell Representation Learning Framework to Leverage Prior Knowledge from Large Language Models/full.md new file mode 100644 index 0000000000000000000000000000000000000000..6ca1ab3974dc7b81b2c36e9fc620c76f6c4e163d --- /dev/null +++ b/ICML/2025/sciLaMA_ A Single-Cell Representation Learning Framework to Leverage Prior Knowledge from Large Language Models/full.md @@ -0,0 +1,492 @@ +# sciLaMA: A Single-Cell Representation Learning Framework to Leverage Prior Knowledge from Large Language Models + +Hongru Hu $^{12}$ Shuwen Zhang $^{3}$ Yongin Choi $^{12}$ Venkat S. Malladi $^{4}$ Gerald Quon $^{12}$ + +# Abstract + +Single-cell RNA sequencing (scRNA-seq) enables high-resolution exploration of cellular diversity and gene regulation, yet analyzing such data remains challenging due to technical and methodological limitations. Existing task-specific deep generative models like Variational AutoEncoder (VAE) and its variants struggle to incorporate external biological knowledge, while transformer-based foundational large Language Models (LLMs or large LaMs) face limitations in computational cost and applicability to tabular gene expression data. Here, we introduce sciLama (single-cell interpretable Language Model Adapter), a novel representation learning framework that bridges these gaps by integrating static gene embeddings from multimodal LLMs with scRNA-seq tabular data through a paired-VAE architecture. Our approach generates context-aware representations for both cells and genes and outperforms state-of-the-art methods in key single-cell downstream tasks, including batch effect correction, cell clustering, and cell-state-specific gene marker and module identification, while maintaining computational efficiency. sciLama offers a computationally efficient, unified framework for comprehensive single-cell data analysis and biologically interpretable gene module discovery. Source code is available at https://github.com/microsoft/sciLaMA + +*Equal contribution $^{1}$ Department of Molecular and Cellular Biology, University of California, Davis, CA USA $^{2}$ Genome Center, University of California, Davis, CA USA $^{3}$ Department of Quantitative Health Sciences, Mayo Clinic, Rochester, MN USA $^{4}$ Health Futures, Microsoft Research, Redmond, WA USA. Correspondence to: Hongru Hu , Venkat S. Malladi , Gerald Quon . + +Proceedings of the $42^{nd}$ International Conference on Machine Learning, Vancouver, Canada. PMLR 267, 2025. Copyright 2025 by the author(s). + +# 1. Introduction + +Single-cell RNA sequencing (scRNA-seq) has revolutionized studies of cellular heterogeneity and transcriptome dynamics by providing gene expression profiles at single-cell resolution. Deep generative models, particularly Variational Autoencoders (VAE) (Kingma & Welling, 2014) and its variants, have become widely used for analyzing scRNA-seq data, which enable dimensionality reduction and representation learning by projecting cells from high-dimensional gene spaces to lower-dimensional embedding spaces (Gronbech et al., 2020; Lopez et al., 2018). These cell embeddings facilitate downstream tasks such as cell clustering, trajectory inference, and differential expression analysis (Chen et al., 2021; Kana et al., 2023; Yan et al., 2023). 
VAE's nonlinear representation capabilities allow them to effectively model complex cellular landscapes, making them well-suited for tabular data like gene expression matrices. However, scRNA-seq analysis remains challenging due to technical noise, sparse measurements, and batch effects, which often obscure true biological signals (Lähnemann et al., 2020). Incorporating external prior knowledge of genes, such as their functional annotations or molecular sequence data, has the potential to mitigate these challenges. However, the representation of input gene expression data as fixed-length vectors in traditional VAEs such as scVI-tools (Lopez et al., 2018) is not directly compatible with the different representations of prior gene knowledge, such as variable-length molecular sequences or text descriptions. This prevents prior gene knowledge from being directly incorporated into traditional VAE architectures. + +Large Language Models (LLMs), on the other hand, have emerged as powerful tools for learning gene representations from various sources, including literature-based textual descriptions (Chen & Zou, 2024; Liu et al., 2023), molecular sequences (Elnaggar et al., 2022; Lin et al., 2023), and large-scale expression atlases (Cui et al., 2024; Theodoris et al., 2023). These models encode sequential data through tokenization and transformer architectures to create static gene embeddings that capture rich biological information. However, LLMs also face challenges: they are computationally expensive to train and inherently less suited for processing tabular data such as gene expression matrices, where VAEs + +demonstrate superior performance (Kedzierska et al., 2023). + +To bridge the complementary strengths of VAEs and LLMs, we propose sciLaMA (single cell interpretable Language Model Adapter), a novel representation learning framework that extends the siVAE (Choi et al., 2023) architecture to integrate precomputed static gene embeddings from pretrained multimodal LLMs with scRNA-seq tabular data. By combining the representation power of VAEs with the adaptable and knowledge-rich embeddings of LLMs, our approach projects static gene information into context-aware representations by aligning each dimension of gene and cell latent space within the unified paired-VAE framework (Section 3). This approach presents a unified framework that improves over state of the art methods in single-cell analysis in three tasks: (1) cell representation learning and batch effect correction, (2) gene expression imputation, and (3) discovery of biologically meaningful gene modules and cell-state-specific regulatory dynamics, all while maintaining computational efficiency. + +Contributions: (1) We introduce a novel framework that incorporates external gene knowledge from pretrained LLMs with scRNA-seq data, facilitating context-aware cell and gene representation learning. (2) We demonstrate that our approach reduces computational requirements while improving performance compared to existing state-of-the-art methods across various single-cell tasks. + +# 2. Related work + +Deep generative approaches for single cell analysis. Deep generative models, particularly those based on variational autoencoders (VAEs), have advanced single-cell RNA sequencing (scRNA-seq) analysis. Methods such as scVI tools learn low-dimensional cell embeddings for cell-centric tasks such as visualization, clustering, and batch correction. 
Researchers have also further utilized feature attribution techniques to identify important genes in specific cell populations and infer gene modules (Janizek et al., 2023) by leveraging the learned cell embeddings. However, these approaches primarily focus on cell representations without inferring gene representations, and pipelines leveraging other tools are needed for gene-centric analyses such as marker identification and gene network discovery. To address this limitation, siVAE (Choi et al., 2023) introduced a unified framework for learning both cell and gene representations, enabling direct gene-centric analyses using the gene representations and therefore eliminating the need for explicit gene module inference via external tools. However, siVAE gene representation learning involves training an encoder whose number of input nodes scales with the number of cells, thus limiting its applications to large datasets. Moreover, scVI, siVAE, and most other VAE methods do not integrate external knowledge into scRNA-seq analysis due + +to the representational challenges discussed above. Methods such as GLUE (Cao & Gao, 2022) incorporate external information about regulatory interactions among features in the form of feature variables, however, such approaches struggle to utilize information such as molecular sequences or natural language descriptions of genes. + +Single-cell foundation language models. Transformer-based large language models (LLMs) have recently been applied for single-cell data analysis. Unlike VAE-based methods, which treat scRNA-seq data as a cell-by-gene matrix, models such as scGPT (Cui et al., 2024) represent expression profiles as sequences of tokens, drawing similarities to natural language and demonstrating a novel way to represent single-cell data. However, despite their promise, single-cell LLMs exhibit certain limitations. Their performance in zero-shot scenarios is often unreliable, and finetuning them requires extensive computational resources and technical expertise compared to task-specific small models (Kedzierska et al., 2023). These drawbacks emphasize the need for approaches that are computationally efficient and capable of bridging foundational knowledge with real-world single-cell tasks. + +Applications of static gene embeddings in single-cell analysis. Gene embeddings derived from non-single-cell biological data modalities can complement information derived from single-cell data analysis. For instance, precomputed gene embeddings from protein language models (PLMs), such as ESM and ProtTrans (Elnaggar et al., 2022; Lin et al., 2023), capture gene molecular properties and have been applied in frameworks like SATURN (Rosen et al., 2024) to identify conserved master regulatory genes across species. Similarly, models such as GenePT (Chen & Zou, 2024) and scELMo (Liu et al., 2023) use embeddings derived from textual descriptions of gene functions and biological pathways via natural language models such as OpenAI text-embedding model (OpenAI, 2022). These applications demonstrate the feasibility of incorporating external static gene embeddings from various modalities into single-cell analysis frameworks. By integrating such embeddings, researchers are able to improve the robustness of single-cell analysis, facilitate gene module characterization, and uncover regulatory dynamics. + +# 3. 
Methods + +Conceptually, sciLaMA is an adapter framework that integrates pretrained gene embeddings from LLMs of different modalities, and tailors them for downstream single-cell analyses. Instead of learning gene representations de novo, sciLaMA adapts and contextualizes these precomputed static gene embeddings by incorporating context specific cell-level data (cell-by-gene expression matrix). In this section, we detail the technical components of the sciLaMA framework + +and its application to single-cell analysis. + +# 3.1. Input data processing and notation + +The sciLaMA framework requires two inputs: (1) A set of gene expression inputs $\{c_i\}_{i = 1}^N$ , representing scRNA-seq data for $N$ cells (scaled log-normalized expression) drawn from a specific cell population. Each of the $N$ cell vectors $c_{i}$ has $M$ measurements corresponding to individual genes. (2) Static gene embeddings $\{g_j\}_{j = 1}^M$ , derived from a single pretrained language model (LaM). These embeddings provide $D$ -dimensional representations of $M$ genes, capturing their properties derived from external prior knowledge, where the number $D$ depends on the embedding dimensionality of the specific LLM. + +# 3.2. sciLaMA architecture + +sciLaMA is based on a paired encoder-decoder design, inspired by siVAE (Choi et al., 2023), a interpretable deep generative model that jointly learns sample (cell) and feature (gene) embeddings using a paired VAE design. siVAE only uses scRNA-seq data to learn both sets of embeddings, whereas sciLaMA uses external data to inform gene embeddings. sciLaMA consists of two encoder-decoders: one for cells and one for genes (Figure 1a). + +# 3.2.1. CELL ENCODER AND DECODER + +The cell encoder $f_{\phi^{\mathrm{cell}}}^{\mathrm{cell}}(\cdot)$ projects each cell $i$ 's expression profile $c_i$ , represented as an $M$ -dimensional gene expression vector, to parameters of a $K$ -dimensional variational posterior distribution with mean $\pmb{\mu}_i^{\mathrm{cell}} \in \mathbb{R}^k$ and variance $(\pmb{\sigma}_i^{\mathrm{cell}})^2 \in \mathbb{R}^k$ . A latent embedding $z_i^{\mathrm{cell}}$ is sampled via the reparameterization trick: + +$$ +\begin{array}{l} \left(\boldsymbol {\mu} _ {i} ^ {\text {c e l l}}, \boldsymbol {\sigma} _ {i} ^ {\text {c e l l}}\right) \leftarrow f _ {\phi^ {\text {c e l l}}} ^ {\text {c e l l}} (\boldsymbol {c} _ {i}) \\ \boldsymbol {z} _ {i} ^ {\text {c e l l}} = \boldsymbol {\mu} _ {i} ^ {\text {c e l l}} + \epsilon \odot \exp \left(0. 5 \cdot \log \left(\sigma_ {i} ^ {\text {c e l l}}\right) ^ {2}\right) (1) \\ \boldsymbol {h} _ {i} ^ {\text {c e l l}} = g _ {\psi^ {\text {c e l l}}} ^ {\text {c e l l}} \left(\boldsymbol {z} _ {i} ^ {\text {c e l l}}\right) (2) \\ \end{array} +$$ + +where $\epsilon \sim \mathcal{N}(\mathbf{0},\mathbf{I})$ and $\odot$ denotes element-wise multiplication. $g_{\psi^{\mathrm{cell}}}^{\mathrm{cell}}(\cdot)$ represents the cell decoder without a conventional final linear transformation layer, and outputs $h_i^{\mathrm{cell}}\in \mathbb{R}^l$ for cell $i$ . + +# 3.2.2. GENE ENCODER AND DECODER + +Similarly, the gene encoder $f_{\phi^{\mathrm{gene}}}^{\mathrm{gene}}(\cdot)$ maps each gene $j$ 's external static embedding $g_j \in \mathbb{R}^D$ , derived from a pretrained LLM, into the contextual embedding space by predicting the parameters of a $K$ -dimensional variational posterior distribution with mean $\pmb{\mu}_j^{\mathrm{gene}} \in \mathbb{R}^k$ and variance $(\pmb{\sigma}_j^{\mathrm{gene}})^2 \in \mathbb{R}^k$ . 
The gene-level decoder $g_{\psi^{\mathrm{gene}}}^{\mathrm{gene}}(\cdot)$ is then used to produce + +output $h_j^{\mathrm{gene}}$ .. + +$$ +\begin{array}{l} \left(\boldsymbol {\mu} _ {j} ^ {\text {g e n e}}, \boldsymbol {\sigma} _ {j} ^ {\text {g e n e}}\right) \leftarrow f _ {\phi^ {\text {g e n e}}} ^ {\text {g e n e}} (\boldsymbol {g} _ {j}) \\ \boldsymbol {z} _ {j} ^ {\text {g e n e}} = \boldsymbol {\mu} _ {j} ^ {\text {g e n e}} + \epsilon \odot \exp \left(0. 5 \cdot \log \left(\sigma_ {j} ^ {\text {g e n e}}\right) ^ {2}\right) (3) \\ \boldsymbol {h} _ {j} ^ {\text {g e n e}} = g _ {\psi^ {\text {g e n e}}} ^ {\text {g e n e}} \left(\boldsymbol {z} _ {j} ^ {\text {g e n e}}\right) (4) \\ \end{array} +$$ + +# 3.2.3. SCILAMA RECONSTRUCTION OUTPUT + +Similar to the siVAE framework (Choi et al., 2023), the output of sciLaMA is the reconstruction of the single cell expression data for gene $j$ in cell $i$ , denoted as $\hat{c}_{i,j}$ , via combining the respective cell and gene decoder outputs $h_i^{\mathrm{cell}}$ and $h_j^{\mathrm{gene}}$ : + +$$ +\hat {c} _ {i, j} = \left(\boldsymbol {h} _ {i} ^ {\text {c e l l}}\right) ^ {T} \times \boldsymbol {h} _ {j} ^ {\text {g e n e}} + b _ {j} \tag {5} +$$ + +# 3.3. Optimization + +The optimization of the sciLaMA framework involves a stepwise training procedure designed for representation learning of both cells and genes (Appendix B), and the training objectives follow the evidence lower bound (ELBO) framework, combining reconstruction accuracy and regularization via Kullback-Leibler (KL) divergence. + +Step 1: Pretraining the Cell Encoder and Decoder: We first pretrain the weights of the cell encoder and decoder $(\phi^{\mathrm{cell}}$ and $\psi^{\mathrm{cell}}$ , respectively) by treating the encoder-decoder as a VAE, where the objective function focuses on matching cell decoder outputs $h_i^{\mathrm{cell}}$ to the original expression vectors $c_{i}$ via a linear transformation with parameters $W^{\mathrm{cell}} \in \mathbb{R}^{l \times M}$ and $\pmb{b}$ . The loss function $\mathcal{L}_{\mathrm{cell}}$ for this step is defined as: + +$$ +\hat {\boldsymbol {c}} _ {i} ^ {\text {c e l l r e c o n}} = \left(\boldsymbol {h} _ {i} ^ {\text {c e l l}}\right) ^ {T} \times \boldsymbol {W} ^ {\text {c e l l}} + \boldsymbol {b} \tag {6} +$$ + +$$ +\mathcal {L} _ {i} ^ {\text {c e l l r e c o n}} = \left(\boldsymbol {c} _ {i} - \hat {\boldsymbol {c}} _ {i} ^ {\text {c e l l r e c o n}}\right) ^ {T} \left(\boldsymbol {c} _ {i} - \hat {\boldsymbol {c}} _ {i} ^ {\text {c e l l r e c o n}}\right) \tag {7} +$$ + +$$ +\mathcal {L} _ {\text {c e l l}} = \sum_ {i} \mathcal {L} _ {i} ^ {\text {c e l l r e c o n}} + \beta \cdot K L \left(\mathcal {N} \left(\boldsymbol {z} _ {i} ^ {\text {c e l l}} \mid \boldsymbol {\mu} _ {i} ^ {\text {c e l l}}, \boldsymbol {\sigma} _ {i} ^ {\text {c e l l}}\right) \| \mathcal {N} (\boldsymbol {0}, \boldsymbol {I})\right) \tag {8} +$$ + +where $\beta$ represents the weight of the KL divergence term in VAEs, and is tuned to prioritize accurate reconstruction during the early stages of training. + +Step 2: Pretraining the Gene Encoder and Decoder: Once the cell encoder and decoder are pretrained, its parameters $(\phi^{\mathrm{cell}},\psi^{\mathrm{cell}},W^{\mathrm{cell}}$ ,and $b)$ are frozen, and we then pretrain the parameters $(\phi^{\mathrm{gene}},\psi^{\mathrm{gene}})$ of the gene encoder $f_{\phi^{\mathrm{gene}}}^{\mathrm{gene}}(\cdot)$ and decoder $g_{\psi^{\mathrm{gene}}}^{\mathrm{gene}}(\cdot)$ , respectively. 
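As a concrete illustration of the Step-2 freezing just described, the sketch below holds the pretrained cell-side parameters fixed and builds an optimizer over the gene-side parameters only; the module shapes are placeholders, not sciLaMA's actual architecture.

```python
# Step-2 pattern (sketch): freeze the pretrained cell encoder/decoder and update
# only the gene encoder/decoder. All modules below are placeholder stand-ins.
import torch
import torch.nn as nn

cell_encoder, cell_decoder = nn.Linear(2000, 32), nn.Linear(32, 128)   # pretrained in Step 1
gene_encoder, gene_decoder = nn.Linear(512, 32), nn.Linear(32, 128)    # trained in Step 2

for module in (cell_encoder, cell_decoder):
    for p in module.parameters():
        p.requires_grad_(False)          # cell branch stays fixed

optimizer = torch.optim.Adam(
    list(gene_encoder.parameters()) + list(gene_decoder.parameters()), lr=1e-3
)
```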
The loss function $\mathcal{L}_{\mathrm{gene}}$ for this step is defined as: + +$$ +\mathcal {L} _ {i} ^ {\text {r e c o n}} = \left(\boldsymbol {c} _ {i} - \hat {\boldsymbol {c}} _ {i}\right) ^ {T} \left(\boldsymbol {c} _ {i} - \hat {\boldsymbol {c}} _ {i}\right) \tag {9} +$$ + +$$ +\mathcal {L} _ {\text {g e n e}} = \sum_ {i} \mathcal {L} _ {i} ^ {\text {r e c o n}} + \beta \cdot K L \left(\mathcal {N} \left(z _ {j} ^ {\text {g e n e}} \mid \boldsymbol {\mu} _ {j} ^ {\text {g e n e}}, \sigma_ {j} ^ {\text {g e n e}}\right) \| \mathcal {N} (\boldsymbol {0}, I)\right) \tag {10} +$$ + +![](images/cd2742fe0a2a480bcbb99302c62723d54ca8c76748a4fe1321e334f7c78aa1f7.jpg) +Figure 1. sciLaMA overview. (a) Diagram of the sciLaMA framework, which utilizes static gene embeddings generated from multimodal language models and employs paired encoder-decoders for both genes and cells. (b) Visualizations of cell and gene latent and last-hidden spaces and their operations for different components of the loss functions. (c) Illustrations of downstream applications using sciLaMA. + +![](images/9275a63fa0370beb62779c2c434303a15f20b7a2b4b1d87005ebdc494f2e0ee8.jpg) + +![](images/befca93e067c19c4d54d6613e86f27dde6796f9e6ca56fa10eb83873349078fd.jpg) + +Note that unlike the reconstruction term $\mathcal{L}_i^{\mathrm{cellrecon}}$ from previous step (Equation (7)), this loss function operates on the outputs of the last hidden layers of both cell and gene decoders (Equation (5)). Because the inputs to the gene encoder are the prior LLM-defined gene embeddings $g_{j}$ and the output is reconstruction of the gene expression measurements $c_{i}$ , this pretraining serves to help adapt the LLM embeddings to the current (gene expression) context. + +Step 3: Joint Optimization of sciLaMA: In the final step, all parameters of the sciLaMA framework are optimized to improve the reconstruction quality of the expression matrix. The loss function $\mathcal{L}_{\mathrm{sciLaMA}}$ for this step is: + +$$ +\hat {\boldsymbol {c}} _ {i, j} ^ {\text {a l i g n m e n t}} = \left(\boldsymbol {z} _ {i} ^ {\text {c e l l}}\right) ^ {T} \times \boldsymbol {z} _ {j} ^ {\text {g e n e}} + b _ {j} \tag {11} +$$ + +$$ +\mathcal {L} _ {i} ^ {\text {a l i g n m e n t}} = \left(\boldsymbol {c} _ {i} - \hat {\boldsymbol {c}} _ {i} ^ {\text {a l i g n m e n t}}\right) ^ {T} \left(\boldsymbol {c} _ {i} - \hat {\boldsymbol {c}} _ {i} ^ {\text {a l i g n m e n t}}\right) \tag {12} +$$ + +$$ +\begin{array}{l} \mathcal {L} _ {\mathrm {s c i L a M A}} = \sum_ {i} \mathcal {L} _ {i} ^ {\text {r e c o n}} + \gamma \cdot \mathcal {L} _ {i} ^ {\text {a l i g n m e n t}} \\ + \beta \cdot K L \left(\mathcal {N} \left(\boldsymbol {z} _ {i} ^ {\text {c e l l}} \mid \boldsymbol {\mu} _ {i} ^ {\text {c e l l}}, \boldsymbol {\sigma} _ {i} ^ {\text {c e l l}}\right) \| \mathcal {N} (\boldsymbol {0}, \boldsymbol {I})\right) \\ + \beta \cdot K L \left(\mathcal {N} \left(\boldsymbol {z} _ {j} ^ {\text {g e n e}} \mid \boldsymbol {\mu} _ {j} ^ {\text {g e n e}}, \boldsymbol {\sigma} _ {j} ^ {\text {g e n e}}\right) \| \mathcal {N} (\boldsymbol {0}, \boldsymbol {I})\right) \tag {13} \\ \end{array} +$$ + +where $\mathcal{L}_i^{\mathrm{alignment}}$ is a reconstruction-based regularization term that encourages alignment between the latent spaces of cells ( $z_i^{\mathrm{cell}}$ ) and genes ( $z_j^{\mathrm{gene}}$ ) by enforcing that the linear product of the embeddings approximates the original expression value of gene $j$ in cell $i$ ( $c_{i,j}$ ). 
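For clarity, a minimal sketch of how the joint objective of Equation (13) can be assembled for a mini-batch is given below: the decoder-level reconstruction of Equation (5), the latent-level alignment term of Equations (11)–(12), and the two KL regularizers. Tensor shapes and the β value are placeholder assumptions; γ = 0.05 follows the default mentioned in the text.

```python
# Sketch of the joint sciLaMA objective (Eq. 13) on a toy mini-batch; all tensors
# below are placeholders standing in for encoder/decoder outputs.
import torch

N, M, K, L = 8, 2000, 32, 128               # cells, genes, latent dim, hidden dim
c = torch.randn(N, M)                        # expression mini-batch
h_cell, h_gene = torch.randn(N, L), torch.randn(M, L)   # decoder hidden outputs
z_cell, z_gene = torch.randn(N, K), torch.randn(M, K)   # latent samples
mu_c, logvar_c = torch.zeros(N, K), torch.zeros(N, K)
mu_g, logvar_g = torch.zeros(M, K), torch.zeros(M, K)
b = torch.zeros(M)

c_hat = h_cell @ h_gene.T + b                # Eq. (5): decoder-level reconstruction
c_hat_align = z_cell @ z_gene.T + b          # Eq. (11): latent-level alignment

recon = ((c - c_hat) ** 2).sum(dim=1).mean()
align = ((c - c_hat_align) ** 2).sum(dim=1).mean()
kl = lambda mu, logvar: 0.5 * (mu ** 2 + logvar.exp() - 1 - logvar).sum(dim=1).mean()

beta, gamma = 1e-3, 0.05                     # beta is an assumed value; gamma per the text
loss = recon + gamma * align + beta * (kl(mu_c, logvar_c) + kl(mu_g, logvar_g))
```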
This term, inspired by siVAE, serves as the interpretability term, ensuring that individual dimensions of the cell and gene embeddings ( $z^{\mathrm{cell}}$ and $z^{\mathrm{gene}}$ ) correspond meaningfully to each other. $\gamma$ is a scalar weight (default = 0.05) that determines the influence of $\mathcal{L}_i^{\mathrm{alignment}}$ term on the overall loss function. A small value prevents it from dominating the optimization process. + +# 3.4. Inference and Embedding Extraction + +After training the sciLaMA framework, the learned cell and gene embeddings can be extracted for downstream analyses. Given the trained encoders $f_{\phi^{\mathrm{cell}}}^{\mathrm{cell}}(\cdot)$ and $f_{\phi^{\mathrm{gene}}}^{\mathrm{gene}}(\cdot)$ , they can be used to project a cell expression profile $c^{(1)}$ or gene embedding $g^{(2)}$ into the cell $(z^{(1)})$ or gene $(z^{(2)})$ latent space for downstream visualization or analysis. + +$$ +\left(\boldsymbol {\mu} ^ {(1)}, \boldsymbol {\sigma} ^ {(1)}\right) \leftarrow f _ {\hat {\phi} ^ {\mathrm {c e l l}}} ^ {\mathrm {c e l l}} \left(\boldsymbol {c} ^ {(1)}\right) \tag {14} +$$ + +$$ +\boldsymbol {z} ^ {(1)} \sim \mathcal {N} (\boldsymbol {\mu} ^ {(1)}, \boldsymbol {\sigma} ^ {(1)}) \tag {15} +$$ + +$$ +\left(\boldsymbol {\mu} ^ {(2)}, \boldsymbol {\sigma} ^ {(2)}\right) \leftarrow f _ {\hat {\phi} ^ {\text {g e n e}}} ^ {\text {g e n e}} \left(\boldsymbol {g} ^ {(2)}\right) \tag {16} +$$ + +$$ +\boldsymbol {z} ^ {(2)} \sim \mathcal {N} (\boldsymbol {\mu} ^ {(2)}, \boldsymbol {\sigma} ^ {(2)}) \tag {17} +$$ + +# 4. Experiments + +The experiments evaluating sciLaMA are designed to assess its performance in single-cell analysis at both cell- and gene-level tasks. For cell-level tasks, sciLaMA is assessed by evaluating its capacity to generate cell embeddings that simultaneously preserve biological signals and remove batch effects, with performance measured by (1) cell clustering annotation accuracy, (2) cell type separation precision, and (3) the effectiveness of batch mixing. For gene-level tasks, sciLaMA is evaluated on its ability to impute gene expression, identify gene markers, infer developmental trajectories and discover temporal dynamic gene modules (Figure 1c). Detailed methodologies are listed in the Appendices C and D. + +# 4.1. Prior Knowledge Improves Cell Representation Learning + +We first evaluated cell-level tasks because gene-level analysis tasks are largely cell state-specific, and therefore rely on + +![](images/3d672d580555059636c0b1cc446f0f03ee866b97babb2f25e9c5bc39741708a2.jpg) + +![](images/8c628162c6bb9775efd278f5940fb237452f15caf706e1f3f5bf19395ea8c307.jpg) +Figure 2. Robust cell representation learning and integration with sciLaMA. (a) Quantitative performance comparison of models based on sciLaMA against other methods in preserving biological variance (blue and orange metrics) and removing batch effects (green metrics). (b-c) Scatter plot directly comparing sciLaMA-GPT (y-axis) to sciLaMA (s.i.) (x-axis, b) and fine-tuned scGPT (x-axis, c). (d) UMAP (McInnes et al., 2020) visualizations of cell embeddings with colors indicating cell types (top) and batch origins (bottom). + +cell-level tasks such as accurate cell clustering and robust cell representations. 
To evaluate sciLaMA's performance and assess the impact of incorporating prior knowledge encoded as gene embeddings on cell-level tasks (Section 4), we benchmarked sciLaMA against the state-of-the-art (SOTA) model scVI (Lopez et al., 2018) and foundation models such as scGPT, CellPLM, and GenePT (Chen & Zou, 2024; Cui et al., 2024; Wen et al., 2023). Multiple variants of sciLaMA were created, each using a different set of gene embeddings precomputed using different prior knowledge databases to determine which prior knowledge is most relevant to single cell tasks: sciLaMA-GenePT, sciLaMA-ProtTrans, sciLaMA-CellPLM, sciLaMA-ChatGPT, and sciLaMA-ESM. To determine the extent to which the sciLaMA framework itself is superior to other models, we created the "self-informed" version of sciLaMA, sciLaMA (s.i.), to represent the framework when learning gene embeddings from the transposed single cell expression data itself solely (without prior LLM-derived knowledge). Celllevel tasks were evaluated using five pancreatic scRNA-seq datasets from different labs and sequencing platforms (Tran et al., 2020). + +Across multiple standard integration metrics (Luecken et al., 2022), all sciLaMA variants robustly outperformed other models both individually (Figure 2a,d, Appendix E) as well as on average (Table 1), suggesting that the sciLaMA frame + +Table 1. Cell representation learning and integration performance on human pancreatic datasets. Adjusted Rand Index (ARI) and Normalized Mutual Information (NMI) for cluster annotation accuracy; Average Silhouette Width (ASW) for cell type separation; batchASW and graph integration local inverse Simpson's Index (iLISI) for batch mixing quality. + +
| Methods | ARI ↑ | NMI ↑ | ASW ↑ | batchASW ↑ | iLISI ↑ |
|---|---|---|---|---|---|
| sciLaMA (avg.) | 0.522 | 0.745 | 0.535 | 0.865 | 0.238 |
| sciLaMA (s.i.) | 0.436 | 0.698 | 0.539 | 0.832 | 0.210 |
| scGPT fine-tuned | 0.483 | 0.704 | 0.650 | 0.736 | 0.074 |
| scVI-batch | 0.447 | 0.718 | 0.499 | 0.744 | 0.115 |
| scVI-raw | 0.297 | 0.570 | 0.453 | 0.621 | 0.030 |
| scGPT zero-shot | 0.321 | 0.487 | 0.442 | 0.588 | 0.005 |
| CellPLM zero-shot | 0.330 | 0.516 | 0.421 | 0.492 | 1.11e-16 |
| GenePT-w | 0.022 | 0.079 | 0.192 | 0.553 | 0.121 |
+ +work is a general, powerful framework for tackling cell-level tasks. For cell type clustering and annotation, sciLaMA achieved an average adjusted Rand index (ARI) of 0.522 and normalized mutual information (NMI) of 0.745, outperforming scVI (with batch variable consideration) by $16.78\%$ and $3.76\%$ , respectively, and fine-tuned scGPT by $8.07\%$ and $5.82\%$ . Additionally, its ARI and NMI values were approximately 1.5 times higher than those of the best zero-shot foundation models, showcasing its ability to generate well-defined cell clusters aligned with cell type annotations from the original studies. In cell type separation, sciLaMA achieved an average silhouette width (ASW) of 0.535 and a graph cell type local inverse Simpson's index (cLISI) of 0.9935, indicating precise separation of cell types with preserved biological variation. Furthermore, for batch effect correction, sciLaMA achieved the highest batch-ASW of 0.865 and a graph integration-LISI (iLISI) of 0.238, surpassing the next-best models by $16.26\%$ and $96.69\%$ , respectively. These results collectively highlight sciLaMA's robust ability to integrate cells across batches while maintaining accurate cell type representations and biological relevance. + +Interestingly, the performance of sciLaMA (s.i.) without any external prior knowledge from LLMs is worse than all variants of sciLaMA with prior knowledge despite the diversity of prior knowledgebases, suggesting that incorporating prior knowledge of gene function is broadly acting to regularize sciLaMA and prevent overfitting (Figure 2b). These results are consistent with the observation across all tasks. sciLaMA outperformed scVI, another SOTA VAE-based model without external knowledge, again supporting that incorporating prior gene knowledge is beneficial to single cell analysis. + +While our experiment above confirmed incorporating prior knowledge is helpful for single cell analysis, we also wondered whether with a framework inspired by paired VAEs, is sciLaMA the best framework for integrating prior knowledge? To explore this, we directly compared the transformer-based foundation model scGPT that was subsequently fine + +Table 2. Comparison of runtime (in seconds) for modeling 14,767 human pancreatic cells sourced from five different origins on a single NVIDIA A100 80GB GPU. Due to memory limitations, the batch size for scGPT was set to 10, while siVAE and the various sciLaMA configurations utilized a batch size of 128. + +
| Method | scGPT fine-tune | siVAE | sciLaMA (avg.) |
|---|---|---|---|
| Runtime (s) ↓ | 19,474 | 2,265 | 759 |
+ +tuned on our training single cell data (scGPT-finetuned) with sciLaMA-scGPT (sciLaMA using pretrained scGPT gene embeddings). Both models are based on the same pretrained scGPT-whole-human as prior knowledge, but differ in how the pretrained embeddings are updated further. sciLaMA-scGPT outperformed fine-tuned scGPT by $6.82\%$ in cell type clustering and annotation task (ARI and NMI) (Figure 2c). Although the fine-tuned scGPT achieved marginally better results in silhouette width (ASW), its lower batch-ASW and integration-LISI (iLISI) scores (by $34.57\%$ on average) indicate poor batch integration. This comparison underscores the lightweight and well-designed nature of sciLaMA, which improves performance while being more computationally efficient, reducing runtime by 25-fold compared to fine-tuned scGPT (Table 2). + +# 4.2. sciLaMA Reconstructs Gene Expression with High Accuracy + +We next benchmarked sciLaMA accuracy on gene-level tasks, starting with the imputation of gene expression patterns. Gene imputation, the prediction of missing or masked gene expression levels based on other genes' profiles, is particularly beneficial for sparsely measured datasets, such as Multiplexed Error-Robust Fluorescence in situ Hybridization (MERFISH) or Antibody-Derived Tags (ADTs), where only a subset of genes is typically quantified in an experiment. We benchmarked sciLaMA against leading models for gene imputation accuracy, including scProjection, gimVI, uniPort and Tangram (Johansen et al., 2023; Lopez et al., 2019; Cao et al., 2022; Biancalani et al., 2021). The experimental setup employed a leave-one-gene-out strategy, where the expression of a single gene was masked across all cells, and the models were tasked with predicting its expression pattern based on the remaining genes. + +Our results show that sciLaMA models consistently outperformed competing models in imputation accuracy (Figure 3a,b, and Table 3) on the spatial transcriptomics data (Codeluppi et al., 2018). sciLaMA achieved the highest scores across established metrics (Appendix D) (Li et al., 2022), outperforming the average performance of other benchmarked methods (Johansen et al., 2023; Lopez et al., 2019; Cao et al., 2022; Biancalani et al., 2021) by $27.39\%$ in Pearson Correlation Coefficient (PCC), $15.58\%$ in Spearman Correlation Coefficient (SCC), $32.86\%$ in 1- + +![](images/671c56b69b365b0d857ecbe0c2231ee5400313702e6d5ac815b4dac47a09df84.jpg) + +![](images/a12eb4f12ddff9ef9863ba114b1433f6ddfddb61f6302c513323bdc0158ade88.jpg) + +![](images/419bc8c36477ab82757d5b3177135c9c9c1bc28ce2d71475c902e28f479d558c.jpg) +Figure 3. Accurate imputation of unseen gene expression with sciLaMA. (a) Quantitative performance comparison of models based on sciLaMA against other methods for gene imputation task using leave-one-gene-out strategy. (b) Metric values for 30 genes from the spatial dataset across methods (color-coded). (c) Example visualizations of measured (left) and imputed (right) spatial gene expression patterns. + +![](images/54c9234f6cebb76ab890038a1103e9abbed0a39ddf6ab2d05db91f4e67aa88af.jpg) + +Jensen-Shannon Divergence (1-JSD), and $3.32\%$ in 1/Root Mean Squared Error (1/RMSE) on average. These metrics indicate that its predictions were more aligned with true gene expression patterns compared to other models (Figure 3a). Notably, the results demonstrate the significance of incorporating external gene information gain, as evidenced by sciLaMA's performance superiority over the baseline sciLaMA (s.i.) 
model, as well as additional baseline sciLaMA (random) and sciLaMA (shuffled) (Appendix E). Unlike sciLaMA, these baseline models do not leverage meaningful prior knowledge derived from LLMs. Specifically, sciLaMA (s.i.) utilizes a transposed single-cell expression matrix, sciLaMA (random) employs a randomized input matrix for the gene encoder, and sciLaMA (shuffled) uses shuffled external gene embeddings to intentionally disrupt dimension alignment. Collectively, these comparisons emphasize the significance of leveraging structured, semantically meaningful gene embeddings derived from LLMs to enhance generalizability. + +Figure 3c illustrates examples of measured versus imputed spatial patterns for genes such as $Cpne5$ and $Sox10$ and show sciLaMA accurately predicts expression while preserving spatial organization and region-specific heterogeneity of expression, which is crucial for understanding tissue spatial + +![](images/1a106982eb01beeae60d361dc910d09f25526bc8b29ca8a3057552a63822a4be.jpg) +Figure 4. Marker gene identification and validation using sciLaMA. (a) UMAP of human PBMC 3K dataset cell embedding using sciLaMA, with points representing cells colored by cell type and outlined by coarse cell classes using dashed circles. (b) Comparison of LLM-derived static gene embedding (top) and sciLaMA-derived contextual gene embedding (bottom) with points representing genes. Marker genes are colored by cell type specificity, and those from the same circle are relevant to the same broader cell classes. Color codes are consistent between (a) and (b). (c) A graph of a gene module identified through sciLaMA-based gene clustering, with Gene Ontology (GO) terms enriched for module-associated genes. The module includes PPBP gene, a known marker for Megakaryocytes. (d) Bar chat of the top six GO terms and significance (adjusted p-values). (e) UMAP visualization of sciLaMA contextual gene embedding on multi-source human pancreas datasets. Marker gene modules associated with different cell types are highlighted. + +![](images/5a5d9a8c0464e3389846af5c56343494adb3d490c50fad68e01089f513edd10a.jpg) + +![](images/d1913e4c87ec05c99ded744afe45477f3d42acfacd8d16953ea33d184f83099c.jpg) + +![](images/cee02e2fdfb7772920d2b1f69a9e9a5de1fdfef3031d79cf5b367c8b7d015ba2.jpg) + +Table 3. Evaluation of gene expression imputation performance on spatial transcriptomics data across multiple methods using Pearson Correlation Coefficient (PCC), Spearman Correlation Coefficient (SCC), Jensen-Shannon Divergence (JSD), and Root Mean Square Error (RMSE). A leave-one-gene-out strategy was applied on 30 measured genes. + +
| Methods | PCC (↑) | SCC (↑) | JSD (↓) | RMSE (↓) |
|---|---|---|---|---|
| sciLaMA (avg.) | 0.222 ± 0.027 | 0.217 ± 0.028 | 0.283 ± 0.008 | 1.242 ± 0.022 |
| scProjection | 0.177 ± 0.029 | 0.207 ± 0.029 | 0.352 ± 0.032 | 1.277 ± 0.023 |
| gimVI | 0.224 ± 0.021 | 0.207 ± 0.024 | 0.580 ± 0.014 | 1.243 ± 0.017 |
| uniPort | 0.166 ± 0.027 | 0.184 ± 0.027 | 0.451 ± 0.017 | 1.287 ± 0.022 |
| Tangram | 0.130 ± 0.019 | 0.154 ± 0.018 | 0.458 ± 0.017 | 1.316 ± 0.015 |
+ +structure. + +# 4.3. sciLaMA Enables Marker Gene Identification + +In single-cell studies, identifying and validating marker genes characteristic of individual cell types is another essential process for cell type annotation traditionally dependent on expert domain knowledge. Conventionally, bioinformaticians preprocess and integrate data, cluster cells, and then experts annotate these clusters using known biomarkers or gene signatures relevant to specific cell types (Butler et al., 2018; Wolf et al., 2018). Such division of labor is time-consuming and demands extensive collaboration. sciLaMA streamlines this process by simultaneously integrating cells and implicitly organizing genes into biologically meaningful modules within its contextual gene representation space. By analyzing gene embeddings, sciLaMA can identify groups of genes that are consistently co-expressed or show coordinated patterns within specific cell types. This goes beyond simply checking the expression levels of predefined markers such as CD4 for T-cells. Instead, it reveals potentially unknown gene modules that strongly correlate + +with particular cellular states or types. sciLaMA not only reduces the manual labor involved in marker identification but also opens up possibilities for discovering new biological insights by detecting subtle, coordinated gene expression patterns that expert-driven methods might overlook. + +To assess sciLaMA's efficacy in marker gene identification, we compared its contextual gene embeddings to static embeddings from the LLMs. sciLaMA's contextualization significantly improved the clustering of markers associated with the same cell states (Figure 4a-b). For example, in the static embeddings (Figure 4b, top), marker genes for the same cell type do not cluster as expected, while in the sciLaMA contextual embeddings (Figure 4b, bottom), markers for the same cell states group together, as indicated by the circles. Moreover, PPBP is a well-established marker for Megakaryocytes (platelet precursor cells) in human peripheral blood mononuclear cells (PBMCs) (Butler et al., 2018), and sciLaMA's contextual gene embedding presents a cluster that includes it. Neighboring genes within this cluster were linked to platelet-related biological processes, cellular components, and molecular functions, confirmed via Gene Ontology (GO) enrichment analysis (Figure 4c-d) (Subramanian et al., 2005). Many of these genes, though not previously annotated as Megakaryocyte markers from the original study, exhibit strong co-expression with PPBP and functional links to platelet biology. Their coordinated clustering in biologically meaningful modules indicates their relevance to Megakaryocyte identity. + +Furthermore, sciLaMA robustly identified marker modules across multiple datasets, demonstrating its effectiveness even in the presence of batch effects (Figure 4e). Importantly, sciLaMA integrating LLM-derived priors gene knowledge outperformed the self-informed version sciL + +![](images/ec47023a67b5f8cd0d4e15b618593cf64b1c64559bee1e1db2f5d2425902a641.jpg) +a + +![](images/7bba547bd66338a53e248b9936ffce9d223d91cc653cd51712d08a0354489eef.jpg) +b + +![](images/311691ba5add1bf93c27b5aacf597c39a93c95c696a23fe467fee89f42141e01.jpg) +C +Figure 5. Enhanced developmental gene trajectory analysis with sciLaMA. (a) Overview of P0 mouse neurodevelopment data, with five cell types from early progenitors to mature excitatory neurons. 
(b) UMAP visualizations of cell embeddings using sciLaMA (top) and scVI (bottom) with a bar plot comparing cell type annotation and separation performance. (c) Pseudotime (x-axis) heatmap displaying the dynamic changes in gene expression across developmental stages. Rows represent ordered temporal specific genes. (d) UMAP visualizations of gene embeddings without (left) and with (right) embedding alignment using sciLaMA. Temporal specific genes (from (c)) are highlighted with color gradient. + +aMA (s.i.) across clustering metrics (Table S7), which indicates the value of leveraging pretrained static gene embeddings. These findings highlight sciLaMA's potential to streamline single-cell studies by reducing reliance on manual annotation and revealing novel biological insights, which advances gene module discovery. + +# 4.4. sciLaMA Enhances Trajectory Analysis by Unveiling Temporal Dynamics of Genes + +Building upon its strength in identifying gene markers and modules across discrete cell types, sciLaMA also excels at capturing temporal dynamics in developmental processes. This capability enables the study of continuous gene expression changes across time and facilitates the analysis of cell differentiation and developmental trajectories. + +To investigate sciLaMA's capability in this context, we conducted pseudotime trajectory analysis using cell embeddings learned by sciLaMA and compared them with those from scVI, a SOTA single-cell model. The analysis was applied to a dataset capturing P0 mouse cortex development (Figure 5a) (Chen et al., 2019). Pseudotime visualizations (Figure 5b, and Appendix E) illustrated that sciLaMA provided + +Table 4. Cell representation learning performance on P0 mouse neurodevelopment dataset, with ARI and NMI quantifying cluster annotation accuracy, and ASW and cLISI quantifying cell type separation. + +
| Methods | ARI ↑ | NMI ↑ | ASW ↑ | cLISI ↑ |
|---|---|---|---|---|
| sciLaMA | 0.316 | 0.351 | 0.518 | 0.738 |
| scVI | 0.284 | 0.291 | 0.501 | 0.501 |
+ +clearer transitions between developmental stages, such as the progression from intermediate progenitors (IPs) to layer-2-3 excitatory neurons (ExNs). sciLaMA outperformed scVI in trajectory clarity by $20.65\%$ overall (Table 4). + +Pseudotime-aligned heatmaps of gene expression (Figure 5c, and Appendix E) highlighted temporal-specific genes with coordinated expression shifts corresponding to distinct stages of cell differentiation. Additionally, sciLaMA's contextual gene embeddings further illuminated temporal relationships between genes, offering insights into the sequential activation of developmental markers (Figure 5d). This analysis provides a comprehensive perspective on the dynamic interplay of genes during cell differentiation and development. + +By accurately mapping cell lineages and identifying stage-specific gene modules, sciLaMA provides researchers with a powerful tool for understanding cell differentiation and developmental processes. When applied to organoid datasets, sciLaMA can also compare developmental trajectories of organoids with those of real tissues. For example, it can identify which gene modules from real tissues correspond to specific stages in organoid development, aiding in the assessment of organoid fidelity. This capability has significant implications for therapeutic strategies, enabling researchers to evaluate how organoids can model human diseases and inform potential treatment designs. + +# 5. Conclusion + +This study introduces sciLaMA, a novel framework that integrates external gene knowledge from language models with single-cell expression data to address critical challenges in single-cell analysis and enable comprehensive downstream tasks spanning both cell-level and gene-level analyses. Our experiments demonstrate the framework's effectiveness and performance superiority, and highlight the value of incorporating external gene knowledge through an innovative design. These findings establish sciLaMA as a powerful tool for advancing our understanding of cellular heterogeneity and gene regulation, and showcase how language models can be leveraged through a lightweight adapter framework. + +# Acknowledgements + +This work began during H.H.'s research internship at Microsoft Research and was supported by an NSF CAREER award (1846559, G.Q.). Additional funding came from the National Institutes of Health, including the Office of the Director/National Institute of Mental Health (DP2 MH129987, G.Q.) and the National Institute of Child Health and Human Development (P50 HD103526). We thank Erdal Cosgun, Shuangjia Lu, and the Microsoft Research Health Futures team for their guidance and technical insights. + +# Impact Statement + +This paper aims to integrate contextual and general knowledge about cells and genes to enhance single-cell data analysis. By leveraging large language models from different modalities, our methodology supports critical tasks with improved performance and computational efficiency. This approach has the potential to advance the field of Machine Learning in scientific research by facilitating the study of cellular heterogeneity, gene regulation, and developmental processes across various biological contexts. Furthermore, its ability to streamline analysis reduces reliance on extensive manual annotation and switching between disparate tools on various tasks. The implications of this approach include its contribution to biomedical research, where it may support therapeutic discovery and precision medicine. 
There are many potential societal consequences of our work, none of which we feel must be specifically highlighted here. + +# References + +Baron, M., Veres, A., Wolock, S. L., Faust, A. L., Gaujoux, R., Vetere, A., Ryu, J. H., Wagner, B. K., Shen-Orr, S. S., Klein, A. M., Melton, D. A., and Yanai, I. A single-cell transcriptomic map of the human and mouse pancreas reveals inter- and intra-cell population structure. Cell Systems, 3(4):346-360, 2016. doi: 10.1016/j.cels.2016.08.011. +Biancalani, T., Scalia, G., Buffoni, L., Avasthi, R., Lu, Z., Sanger, A., Tokcan, N., Vanderburg, C. R., Segerstolpe, Å., Zhang, M., et al. Deep learning and alignment of spatially resolved single-cell transcriptomes with tangram. Nature Methods, 18(11):1352-1362, 2021. doi: 10.1038/s41592-021-01264-7. +Butler, A., Hoffman, P., Smibert, P., Papalexi, E., and Satija, R. Integrating single-cell transcriptomic data across different conditions, technologies, and species. Nature Biotechnology, 36(5):411-420, 2018. doi: 10.1038/nbt.4096. +Cao, K., Gong, Q., Hong, Y., and Wan, L. A unified computational framework for single-cell data integration with optimal transport. Nature Communications, 13(1):7419, 2022. doi: 10.1038/s41467-022-35094-8. +Cao, Z.-J. and Gao, G. Multi-omics single-cell data integration and regulatory inference with graph-linked embedding. Nature Biotechnology, 40(10):1458-1466, 2022. doi: 10.1038/s41587-022-01284-4. +Chen, S., Lake, B. B., and Zhang, K. High-throughput sequencing of the transcriptome and chromatin accessibility in the same cell. Nature Biotechnology, 37(12): 1452-1457, 2019. doi: 10.1038/s41587-019-0290-0. +Chen, X., Xu, J., Zhou, R., Chen, W., Fang, J., and Liu, C. Trajvae: A variational autoencoder model for trajectory generation. Neurocomputing, 428:332-339, 2021. doi: 10.1016/j.neucom.2020.03.120. +Chen, Y. and Zou, J. Simple and effective embedding model for single-cell biology built from chatgpt. Nature Biomedical Engineering, 2024. doi: 10.1038/s41551-024-01284-6. +Choi, Y., Li, R., and Quon, G. sivae: Interpretable deep generative models for single-cell transcriptomes. Genome Biology, 24(1):29, 2023. doi: 10.1186/s13059-023-02850-y. +Codeluppi, S., Borm, L. E., Zeisel, A., La Manno, G., van Lunteren, J. A., Svensson, C. I., and Linnarsson, S. Spatial organization of the somatosensory cortex revealed by osmfish. Nature Methods, 15(11):932-935, 2018. doi: 10.1038/s41592-018-0175-z. + +Cui, H., Wang, C., Maan, H., Pang, K., Luo, F., Duan, N., and Wang, B. scgt: Toward building a foundation model for single-cell multi-omics using generative ai. Nature Methods, 21(8):1470-1480, 2024. doi: 10.1038/s41592-024-02201-0. +Elnaggar, A., Heinzinger, M., Dallago, C., Rehawi, G., Wang, Y., Jones, L., Gibbs, T., Feher, T., Angerer, C., Steinegger, M., Bhowmik, D., and Rost, B. Protrans: Toward understanding the language of life through self-supervised learning. IEEE Transactions on Pattern Analysis and Machine Intelligence, 44(10):7112-7127, 2022. doi: 10.1109/TPAMI.2021.3095381. +Grønbech, C. H., Vording, M. F., Timshel, P. N., Sønderby, C. K., Pers, T. H., and Winther, O. scvae: Variational auto-encoders for single-cell gene expression data. Bioinformatics, 36(16):4415-4422, 2020. doi: 10.1093/bioinformatics/btaa293. +Janizek, J. D., Spiro, A., Celik, S., Blue, B. W., Russell, J. C., Lee, T.-I., Kaeberlin, M., and Lee, S.-I. Pause: Principled feature attribution for unsupervised gene expression analysis. Genome Biology, 24(1):81, 2023. doi: 10.1186/s13059-023-02901-4. 
+Johansen, N., Hu, H., and Quon, G. Projecting rna measurements onto single cell atlases to extract cell typespecific expression profiles using scprojection. Nature Communications, 14(1):5192, 2023. doi: 10.1038/s41467-023-40744-6. +Kana, O., Nault, R., Filipovic, D., Marri, D., Zacharewski, T., and Bhattacharya, S. Generative modeling of single-cell gene expression for dose-dependent chemical perturbations. *Patterns*, 4(8):100817, 2023. doi: 10.1016/j.patter.2023.100817. +Kedzierska, K. Z., Crawford, L., Amini, A. P., and Lu, A. X. Assessing the limits of zero-shot foundation models in single-cell biology. 2023. doi: 10.1101/2023.10.16.561085. +Kingma, D. P. and Welling, M. Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114, 2014. doi: 10.48550/arXiv.1312.6114. +Lähnemann, D., Köster, J., Szczurek, E., McCarthy, D. J., Hicks, S. C., Robinson, M. D., Vallejos, C. A., Campbell, K. R., Beerenwinkel, N., Mahfouz, A., et al. Eleven grand challenges in single-cell data science. Genome Biology, 21(1):31, 2020. doi: 10.1186/s13059-020-1926-6. +Li, B., Zhang, W., Guo, C., Xu, H., Li, L., Fang, M., Hu, Y., Zhang, X., Yao, X., Tang, M., et al. Benchmarking spatial and single-cell transcriptomics integration methods for transcript distribution prediction and cell type + +deconvolution. Nature Methods, 19(6):662-670, 2022. doi: 10.1038/s41592-022-01480-9. +Lin, Z., Akin, H., Rao, R., Hie, B., Zhu, Z., Lu, W., Smetanin, N., Verkuil, R., Kabeli, O., Shmueli, Y., et al. Evolutionary-scale prediction of atomic-level protein structure with a language model. 2023. +Liu, T., Chen, T., Zheng, W., Luo, X., and Zhao, H. scelmo: Embeddings from language models are good learners for single-cell data analysis. 2023. +Lopez, R., Regier, J., Cole, M. B., Jordan, M. I., and Yosef, N. Deep generative modeling for single-cell transcriptomics. Nature Methods, 15(12):1053-1058, 2018. doi: 10.1038/s41592-018-0229-2. +Lopez, R., Nazaret, A., Langevin, M., Samaran, J., Regier, J., Jordan, M. I., and Yosef, N. A joint model of unpaired data from scrna-seq and spatial transcriptomics for imputing missing gene expression measurements. arXiv preprint arXiv:1905.02269, 2019. +Luecken, M. D., Böttner, M., Chaichoompu, K., Danese, A., Interlandi, M., Mueller, M. F., Strobl, D. C., Zappia, L., Dugas, M., Colomé-Tatché, M., and Theis, F. J. Benchmarking atlas-level data integration in single-cell genomics. Nature Methods, 19(1):41-50, 2022. doi: 10.1038/s41592-021-01336-8. +McInnes, L., Healy, J., and Melville, J. Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426, 2020. doi: 10.48550/arXiv.1802.03426. +Muraro, M. J., Dharmadhikari, G., Grün, D., Groen, N., Dielen, T., Jansen, E., van Gurp, L., Engelse, M. A., Carlotti, F., de Koning, E. J. P., and van Oudenaarden, A. A single-cell transcriptome atlas of the human pancreas. Cell Systems, 3(4):385-394, 2016. doi: 10.1016/j.cels.2016.09.002. +OpenAI. New and improved embedding model: Text-embedding-ada-002. 2022. URL https://openai.com/index/new-and-improved-embedding-model/. +Rosen, Y., Brbic, M., Roohani, Y., Swanson, K., Li, Z., and Leskovec, J. Toward universal cell embeddings: Integrating single-cell rna-seq datasets across species with saturn. Nature Methods, 21(8):1492-1500, 2024. doi: 10.1038/s41592-024-02191-z. +Segerstolpe, Å., Palasantza, A., Eliasson, P., Andersson, E.-M., Andréasson, A.-C., Sun, X., Picelli, S., Sabirsh, A., Clausen, M., Bjursell, M. K., et al. 
Single-cell transcriptome profiling of human pancreatic islets in health and type 2 diabetes. Cell Metabolism, 24(4):593-607, 2016. doi: 10.1016/j.cmet.2016.08.020. + +Subramanian, A., Tamayo, P., Mootha, V. K., Mukherjee, S., Ebert, B. L., Gillette, M. A., Paulovich, A., Pomeroy, S. L., Golub, T. R., Lander, E. S., and Mesirov, J. P. Gene set enrichment analysis: A knowledge-based approach for interpreting genome-wide expression profiles. Proceedings of the National Academy of Sciences, 102(43): 15545-15550, 2005. doi: 10.1073/pnas.0506580102. +Theodoris, C. V., Xiao, L., Chopra, A., Chaffin, M. D., Al Sayed, Z. R., Hill, M. C., Mantineo, H., Brydon, E. M., Zeng, Z., Liu, X. S., and Ellinor, P. T. Transfer learning enables predictions in network biology. Nature, 618(7965):616-624, 2023. doi: 10.1038/s41586-023-06139-9. +Tran, H. T. N., Ang, K. S., Chevrier, M., Zhang, X., Lee, N. Y. S., Goh, M., and Chen, J. A benchmark of batch-effect correction methods for single-cell rna sequencing data. Genome Biology, 21(1):12, 2020. doi: 10.1186/s13059-019-1850-9. +Wang, Y. J., Schug, J., Won, K.-J., Liu, C., Naji, A., Avrahami, D., Golson, M. L., and Kaestner, K. H. Single-cell transcriptomics of the human endocrine pancreas. Diabetes, 65(10):3028-3038, 2016. doi: 10.2337/db16-0405. +Wen, H., Tang, W., Dai, X., Ding, J., Jin, W., Xie, Y., and Tang, J. Cellplm: Pre-training of cell language model beyond single cells. 2023. doi: 10.1101/2023.10.03.560734. +Wolf, F. A., Angerer, P., and Theis, F. J. Scanpy: Large-scale single-cell gene expression data analysis. Genome Biology, 19(1):15, 2018. doi: 10.1186/s13059-017-1382-0. +Xin, Y., Kim, J., Okamoto, H., Ni, M., Wei, Y., Adler, C., Murphy, A. J., Yancopoulos, G. D., Lin, C., and Gromada, J. RNA sequencing of single human islet cells reveals type 2 diabetes genes. Cell Metabolism, 24(4):608-615, 2016. doi: 10.1016/j.cmet.2016.08.018. +Yan, J., Ma, M., and Yu, Z. bmvae: A variational autoencoder method for clustering single-cell mutation data. Bioinformatics, 39(1):btac790, 2023. doi: 10.1093/bioinformatics/btac790. +Zeisel, A., Muñoz-Manchado, A. B., Codeluppi, S., Lönnerberg, P., La Manno, G., Juréus, A., Marques, S., Munguba, H., He, L., Betsholtz, C., et al. Cell types in the mouse cortex and hippocampus revealed by single-cell rna-seq. Science, 347(6226):1138-1142, 2015. doi: 10.1126/science.aaa1934. + +# Appendix sciLaMA: A Single-Cell Representation Learning Framework to Leverage Prior Knowledge from Large Language Models + +# A. Model Input Processing: + +# A.1. Cell Encoder Input + +As mentioned in Section 3 Methods, the input for cell Encoder is the scRNA-seq data for specific cell population $c$ . Therefore, $c_{i,j}$ denotes the scaled log-normalized expression value of gene $j$ in cell $i$ . + +The raw scRNA-seq expression matrix, $c^{\mathrm{raw}}$ , is a sparse count matrix. For use in sciLaMA, the data after quality control (QC) is processed through library size normalization and feature-wise z-score scaling to achieve zero mean and unit variance. Values beyond $\pm 10$ are clipped. 
Specifically, the normalized expression $c^{\mathrm{norm}}$ is calculated as:

$$
c_{i,j}^{\mathrm{norm}} = \log_{e}\left(1 + 10^{4} \times \frac{c_{i,j}^{\mathrm{raw}}}{\sum_{k=1}^{m} c_{i,k}^{\mathrm{raw}}}\right)
$$

Here, $c_{i,j}^{\mathrm{raw}}$ represents the raw count value of gene $j$ in cell $i$, and $\sum_{k=1}^{m} c_{i,k}^{\mathrm{raw}}$ is the total expression count for cell $i$. The multiplication by $10^4$ sets a standardized size factor for normalization. This normalization procedure adjusts for library-size differences across cells and prepares the data for downstream analysis.
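As a concrete illustration, this preprocessing can be sketched in a few lines of NumPy (a minimal sketch, not the exact sciLaMA implementation; `raw_counts` denotes a hypothetical cells $\times$ genes count matrix):

```python
import numpy as np

def preprocess(raw_counts, clip=10.0):
    """Library-size normalization, log transform, and feature-wise z-scoring with clipping."""
    raw_counts = np.asarray(raw_counts, dtype=float)             # cells x genes
    lib_size = raw_counts.sum(axis=1, keepdims=True)             # total counts per cell
    norm = np.log1p(1e4 * raw_counts / lib_size)                 # c^norm as in the equation above
    z = (norm - norm.mean(axis=0)) / (norm.std(axis=0) + 1e-8)   # zero mean, unit variance per gene
    return np.clip(z, -clip, clip)                               # clip values beyond +/- 10
```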
# A.2. Gene Encoder Input

In this study, sciLaMA integrated static gene embeddings from six external sources across three distinct modalities.

| Source | Dimensionality | Modality |
| --- | --- | --- |
| ChatGPT | 1536 | Text |
| GenePT (NCBI) | 1536 | Text |
| ESM | 5120 | Protein Sequence |
| ProtTrans | 1024 | Protein Sequence |
| scGPT | 512 | Single Cell |
| CellPLM | 1024 | Single Cell |
+ +Table S1. Gene Embedding Sources and Characteristics. + +# A.2.1 Natural Language Embeddings + +We acquired text description-based gene embeddings from two studies: GenePT and scELMo (Chen & Zou, 2024; Liu et al., 2023), utilizing the OpenAI text-embedding-ada-002 model (OpenAI, 2022). These embeddings were generated using two distinct text corpora: GPT-3.5 generated descriptions (referred to as ChatGPT) and National Center for Biotechnology Information (NCBI) gene card summaries (referred to as GenePT). We obtained 1,536-dimensional static embeddings for each gene ( $d = 1,536$ ). + +# A.2.2 Protein Language Embeddings + +We derived protein sequence-based gene embeddings from two protein language models: ESM2 t48_15B_UR50D with 5,120-dimensional embeddings per gene (Lin et al., 2023), and ProtXLNet from ProtTrans with 1,024-dimensional embeddings (Elnaggar et al., 2022) from the SATURN study (Rosen et al., 2024). These embeddings were generated using the amino acid sequences of each corresponding gene. + +# A.2.3 Single-Cell Gene Language Embeddings + +For single-cell foundation models, we retrieved static gene embeddings from two pretrained models: scGPT-whole-human (512-dimensional embeddings) (Cui et al., 2024) and cellPLM (1,024-dimensional embeddings) (Wen et al., 2023). The scGPT embeddings were obtained using the model's GitHub tutorial, while cellPLM embeddings were extracted from the embedder module's feature encoder parameters, as directed by the authors. + +# B. Model Optimization Illustration: + +The sciLaMA model optimization process, comprehensively described in Section 3 Methods, is illustrated through a stepwise training strategy visualization (Figure S1). The optimal hyperparameter values chosen for our experiments are scalar $\gamma = 0.05$ and latent dimensionality $K = 40$ , based on the evaluations presented in Table S8 and Table S9. + +![](images/cc9cbd21775efcb80c6e565a2d51edbc42638bd7a36bdcf0f4e9d20fe2350b0d.jpg) +Pretrain cellVAE + +![](images/153a167275beec006470f66ba8e2b6e23bad7ec6bf83edb7a74137a4117cb968.jpg) +Freeze cellVAE, pretrain geneVAE +Figure S1. Schematic representation of the progressive optimization workflow for the sciLaMA framework. (Box indicates freezing parameters.) + +![](images/3c79a395074846269e10f73777ea9294f4092c109fddbfaf4185ea3ea2f95268.jpg) +Jointly refine paired VAEs of sciLaMA + +# C. Dataset Introduction: + +# C.1. Experiment 1: Cell Representation Learning Benchmarking + +This experiment benchmarks cell representation learning methods using a combination of single-cell RNA sequencing datasets derived from five studies focused on the pancreas. The data includes a total of 14,767 cells spanning 13,062 genes (after intersection with precomputed static gene embeddings). + +Datasets Used: Baron et al.: 8,569 cells (Baron et al., 2016); Segerstolpe et al.: 2,127 cells (Segerstolpe et al., 2016); Muraro et al.: 2,122 cells (Muraro et al., 2016); Xin et al.: 1,492 cells (Xin et al., 2016); Wang et al.: 457 cells (Wang et al., 2016). + +The aggregated dataset was a gold-standard benchmarking dataset originally analyzed in the context of batch-effect correction, as described in Tran et al., in 2020 (Tran et al., 2020). This benchmarking experiment evaluates the performance of cell representation learning in mitigating batch effects while preserving biological signal. + +# C.2. 
Experiment 2: Gene Expression Imputation Benchmarking

This experiment evaluates the accuracy of gene expression imputation approaches by leveraging two complementary datasets (Zeisel et al., 2015; Codeluppi et al., 2018):

# Reference scRNA-seq Dataset (Zeisel et al., 2015):

- Number of cells: 3,005
- Total genes: 19,972, with 3,654 highly variable genes selected for benchmarking.
- Validation: A $10\%$ validation split is used for early stopping during model training.

# Spatial Transcriptomics Dataset (Codeluppi et al., 2018):

- Number of spatial spots: 4,530
- Genes: 30, analyzed using a leave-one-gene-out approach to simulate imputation scenarios.

This setup allows for assessing the generalizability of gene imputation models.

# C.3. Experiment 3: Marker Gene Identification

This experiment focuses on identifying marker genes for distinct cell types using the human Peripheral Blood Mononuclear Cell (PBMC) 3K dataset from 10x Genomics, a legacy dataset widely utilized in tools like Seurat (Butler et al., 2018) and scanpy (Wolf et al., 2018) tutorials. The ground truth gene markers and cell type annotations were obtained from the tutorials (https://satijalab.org/seurat/articles/pbmc3k_tutorial, and https://scanpy.readthedocs.io/en/stable/tutorials/basics/clustering-2017.html).

# Dataset Details:

- Initial Size: 2,700 cells $\times$ 32,738 genes
- Final Size: 2,700 cells $\times$ 9,540 genes (post-filtering and intersection with static gene embeddings).
| Cell Type | Number of cells |
| --- | --- |
| CD4 T cells | 1158 |
| CD14 Monocytes | 487 |
| B cells | 357 |
| CD8 T cells | 329 |
| FCGR3A Monocytes | 160 |
| NK cells | 160 |
| Dendritic cells | 36 |
| Megakaryocytes | 13 |

Table S2. Cell Type Statistics of human PBMC 3K dataset
# C.4. Experiment 4: Trajectory Analysis and Temporal Dynamic Gene Discovery

This experiment investigates gene dynamics along developmental trajectories using the P0 mouse cortex dataset from the SNARE-seq study (Chen et al., 2019). The original SNARE-seq dataset includes both transcriptomic and epigenomic information from the same single cells, but we only utilized the transcriptomic data with 1,469 cells and 8,293 genes (after intersection with precomputed static gene embeddings). This experiment focuses on uncovering temporally dynamic genes critical for neurodevelopmental processes. The ground truth gene markers and cell type annotations were obtained from the original study.
| Cell Type | Number of cells |
| --- | --- |
| IP_Hmgn2 | 214 |
| IP_Gadd45g | 99 |
| IP_Eomes | 437 |
| Ex23_Cntn2 | 177 |
| Ex23_Cux1 | 542 |
+ +Table S3. Cell Type Statistics of mouse P0 cortex dataset + +# D. Benchmarking Metrics Introduction: + +To comprehensively evaluate the performance of various methods, we employ metrics tailored to different aspects of single-cell data analysis, including cluster annotation accuracy, cell type separation, batch mixing quality, and predictive/imputation accuracy (Li et al., 2022; Luecken et al., 2022), briefly summarized below: + +# D.1. Clustering and Annotation Accuracy + +To assess the biological relevance of clustering and annotation based on the learned embeddings, we employ: + +- Adjusted Rand Index (ARI): Measures the agreement between predicted and ground-truth cluster labels, adjusted for chance. A higher ARI indicates better alignment between predicted clusters and original biological annotations, reflecting more accurate and biologically meaningful clustering. +- Normalized Mutual Information (NMI): Quantifies the mutual dependence between predicted clusters and ground-truth cell type annotation labels, normalized to account for the total number of clusters. A higher NMI indicates better clustering accuracy. + +# D.2. Cell Type Separation + +To evaluate how well methods preserve separation between distinct cell types, we employ: + +- Average Silhouette Width (ASW): Evaluates the cohesion within clusters and the separation between them. Higher ASW scores indicate that cells within the same cluster are more similar to each other than to cells in other clusters, signifying well-defined clusters. +- Graph Cell-Type Integration Local Inverse Simpson's Index (cLISI): Measures the local diversity of cell types within neighborhoods in an integrated graph representation. High cLISI values suggest better grouping of similar cell types in the embedding space. + +# D.3. Batch-Effect Correction Quality + +To evaluate batch effect removal while preserving biological variance, we apply: + +- Batch-Adjusted Silhouette Width (batchASW): Evaluates the extent of batch mixing while penalizing over-mixing of unrelated cells. Higher batchASW scores indicate better batch integration without compromising biological separation. +- Graph Integration Local Inverse Simpson's Index (iLISI): Measures the diversity of batch labels within local neighborhoods of an integrated graph. Higher iLISI scores indicate more uniform batch mixing, reflecting better integration while preserving cell type integrity. + +# D.4. Predictive Accuracy and Divergence Metrics + +For imputation and gene expression prediction tasks, we employ: + +- Pearson Correlation Coefficient (PCC): Assesses linear relationships between predicted and observed gene expression values, with higher values indicating stronger correlations. +- Spearman Correlation Coefficient (SCC): Evaluates rank-based relationships, capturing monotonic correlations between predicted and observed values, providing insights into the consistency of expression patterns. +- Jensen-Shannon Divergence (JSD): Measures the similarity between predicted and true gene expression distributions. Lower JSD values indicate better agreement between the two distributions. +- Root Mean Square Error (RMSE): Quantifies the average magnitude of errors between predicted and observed values. Lower RMSE scores reflect higher accuracy + +D.5. 
Clustering Quality Metrics

To evaluate the geometric coherence and separation of clusters in the learned gene embedding space, we include two additional metrics:

- Davies-Bouldin Index (DBI): Quantifies the ratio of intra-cluster dispersion to inter-cluster separation. Lower DBI values indicate better-defined clusters with high intra-cluster similarity and distinct separation between clusters.
- Calinski-Harabasz Score (CHS): Measures the ratio of between-cluster dispersion to within-cluster dispersion. Higher CHS values reflect dense, well-separated clusters.
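Most of these metrics are available in standard libraries; the following is a minimal sketch (assuming scikit-learn and SciPy are installed; `Z` is a cells $\times$ dimensions embedding matrix, and `labels_true`/`labels_pred` are hypothetical cell-type and cluster labels) of how the clustering and predictive metrics above can be computed:

```python
import numpy as np
from scipy.spatial.distance import jensenshannon
from scipy.stats import pearsonr, spearmanr
from sklearn.metrics import (adjusted_rand_score, normalized_mutual_info_score,
                             silhouette_score, davies_bouldin_score,
                             calinski_harabasz_score)

def clustering_metrics(Z, labels_true, labels_pred):
    """Clustering/annotation metrics on an embedding matrix Z (cells x dims)."""
    return {
        "ARI": adjusted_rand_score(labels_true, labels_pred),
        "NMI": normalized_mutual_info_score(labels_true, labels_pred),
        "ASW": silhouette_score(Z, labels_true),       # cohesion vs. separation of cell types
        "DBI": davies_bouldin_score(Z, labels_pred),   # lower is better
        "CHS": calinski_harabasz_score(Z, labels_pred),
    }

def prediction_metrics(y_true, y_pred):
    """Imputation metrics for one gene's observed vs. predicted expression vector."""
    p = np.clip(y_true, 0, None)
    q = np.clip(y_pred, 0, None)
    p, q = p / (p.sum() + 1e-12), q / (q.sum() + 1e-12)   # treat as distributions for JSD
    return {
        "PCC": pearsonr(y_true, y_pred)[0],
        "SCC": spearmanr(y_true, y_pred)[0],
        "JSD": jensenshannon(p, q) ** 2,                   # squared JS distance = divergence
        "RMSE": float(np.sqrt(np.mean((y_true - y_pred) ** 2))),
    }
```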
# E. Supplementary Results:

| Methods | w/ external knowledge | ARI ↑ | NMI ↑ | ASW ↑ | batchASW ↑ | iLISI ↑ |
| --- | --- | --- | --- | --- | --- | --- |
| sciLaMA-GenePT | ✓ | 0.545 | 0.767 | 0.539 | 0.863 | 0.240 |
| sciLaMA-CellPLM | ✓ | 0.479 | 0.723 | 0.541 | 0.871 | 0.257 |
| sciLaMA-ProtTrans | ✓ | 0.547 | 0.749 | 0.538 | 0.864 | 0.229 |
| sciLaMA-ChatGPT | ✓ | 0.545 | 0.762 | 0.534 | 0.863 | 0.225 |
| sciLaMA-scGPT | ✓ | 0.522 | 0.746 | 0.526 | 0.867 | 0.223 |
| sciLaMA-ESM | ✓ | 0.494 | 0.722 | 0.529 | 0.864 | 0.253 |
| sciLaMA (s.i.) | × | 0.436 | 0.698 | 0.539 | 0.832 | 0.210 |
+ +Table S4. Cell representation learning and integration performance on human pancreatic datasets across variants of sciLaMA models + +
| Methods | cLISI ↑ |
| --- | --- |
| sciLaMA-GenePT | 0.995 |
| sciLaMA-CellPLM | 0.995 |
| sciLaMA-ProtTrans | 0.992 |
| sciLaMA-ChatGPT | 0.993 |
| sciLaMA-scGPT | 0.993 |
| sciLaMA-ESM | 0.993 |
| sciLaMA (s.i.) | 0.987 |
| scGPT fine-tuned | 0.998 |
| scVI-batch | 0.982 |
| scVI-raw | 0.972 |
| scGPT zero-shot | 0.951 |
| CellPLM zero-shot | 0.961 |
| GenePT-w | 0.838 |
+ +Table S5. Graph Cell-Type Integration Local Inverse Simpson's Index (cLISI) scores across methods (listed as supplementary result due to the low variance of 0.001714) + +
| Methods | w/ external knowledge | PCC (↑) | SCC (↑) | JSD (↓) | RMSE (↓) |
| --- | --- | --- | --- | --- | --- |
| sciLaMA-GenePT | ✓ | 0.220 ± 0.029 | 0.214 ± 0.031 | 0.280 ± 0.009 | 1.243 ± 0.023 |
| sciLaMA-CellPLM | ✓ | 0.222 ± 0.027 | 0.218 ± 0.028 | 0.286 ± 0.009 | 1.242 ± 0.022 |
| sciLaMA-ProtTrans | ✓ | 0.218 ± 0.026 | 0.211 ± 0.028 | 0.283 ± 0.009 | 1.246 ± 0.021 |
| sciLaMA-ChatGPT | ✓ | 0.219 ± 0.027 | 0.217 ± 0.027 | 0.282 ± 0.009 | 1.244 ± 0.022 |
| sciLaMA-scGPT | ✓ | 0.219 ± 0.027 | 0.217 ± 0.027 | 0.285 ± 0.009 | 1.244 ± 0.022 |
| sciLaMA-ESM | ✓ | 0.233 ± 0.026 | 0.227 ± 0.027 | 0.282 ± 0.009 | 1.233 ± 0.022 |
| sciLaMA (s.i.) | × | 0.202 ± 0.027 | 0.212 ± 0.025 | 0.286 ± 0.009 | 1.258 ± 0.022 |
| sciLaMA (random) | × | 0.051 ± 0.027 | 0.049 ± 0.031 | 0.289 ± 0.009 | 1.374 ± 0.020 |
| sciLaMA (shuffled) | × | 0.056 ± 0.036 | 0.043 ± 0.037 | 0.288 ± 0.009 | 1.366 ± 0.027 |
+ +Table S6. Evaluation of gene expression imputation performance on spatial transcriptomics data across variants of sciLaMA models + +
| Methods | w/ external knowledge | Davies-Bouldin Index (↓) | Calinski-Harabasz Score (↑) |
| --- | --- | --- | --- |
| sciLaMA-GenePT | ✓ | 0.852 | 16.376 |
| sciLaMA-CellPLM | ✓ | 0.727 | 19.610 |
| sciLaMA-ProtTrans | ✓ | 0.802 | 19.947 |
| sciLaMA-ChatGPT | ✓ | 0.874 | 16.522 |
| sciLaMA-scGPT | ✓ | 0.780 | 17.973 |
| sciLaMA-ESM | ✓ | 0.780 | 16.920 |
| sciLaMA (s.i.) | × | 0.977 | 13.087 |
+ +Table S7. Clustering performance comparison for marker gene identification across variants of sciLaMA models + +
| γ | ARI mean | ARI std | NMI mean | NMI std | ASW mean | ASW std | cLISI mean | cLISI std |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 0 | 0.464 | 0.371 | 0.513 | 0.390 | 0.589 | 0.087 | 0.881 | 0.170 |
| 0.01 | 0.582 | 0.021 | 0.734 | 0.022 | 0.645 | 0.010 | 0.990 | 0.002 |
| 0.05 | 0.665 | 0.114 | 0.763 | 0.041 | 0.654 | 0.005 | 0.991 | 0.002 |
| 0.1 | 0.634 | 0.088 | 0.751 | 0.023 | 0.658 | 0.012 | 0.992 | 0.002 |
| 0.25 | 0.581 | 0.024 | 0.743 | 0.013 | 0.655 | 0.006 | 0.990 | 0.002 |
| 0.5 | 0.592 | 0.010 | 0.747 | 0.004 | 0.658 | 0.012 | 0.993 | 0.001 |
| 0.75 | 0.590 | 0.015 | 0.748 | 0.013 | 0.656 | 0.012 | 0.992 | 0.002 |
| 1 | 0.647 | 0.107 | 0.762 | 0.038 | 0.651 | 0.025 | 0.993 | 0.003 |
+ +Table S8. Effect of scalar $\gamma$ on clustering performance across multiple metrics. + +
| K (latent dim) | ARI mean | ARI std | NMI mean | NMI std | ASW mean | ASW std | cLISI mean | cLISI std |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 10 | 0.651 | 0.110 | 0.756 | 0.042 | 0.654 | 0.005 | 0.991 | 0.002 |
| 20 | 0.627 | 0.082 | 0.761 | 0.025 | 0.631 | 0.015 | 0.991 | 0.000 |
| 30 | 0.583 | 0.017 | 0.742 | 0.022 | 0.633 | 0.014 | 0.991 | 0.001 |
| 40 | 0.680 | 0.114 | 0.771 | 0.039 | 0.637 | 0.009 | 0.991 | 0.002 |
| 50 | 0.606 | 0.085 | 0.743 | 0.023 | 0.631 | 0.010 | 0.990 | 0.002 |
| 60 | 0.649 | 0.100 | 0.757 | 0.030 | 0.631 | 0.015 | 0.990 | 0.002 |
| 70 | 0.590 | 0.037 | 0.738 | 0.017 | 0.635 | 0.009 | 0.991 | 0.001 |
| 80 | 0.649 | 0.095 | 0.753 | 0.020 | 0.632 | 0.009 | 0.991 | 0.001 |
| 90 | 0.656 | 0.102 | 0.756 | 0.030 | 0.637 | 0.008 | 0.991 | 0.001 |
| 100 | 0.641 | 0.108 | 0.751 | 0.027 | 0.635 | 0.006 | 0.991 | 0.002 |
+ +Table S9. Effect of varying latent dimension $K$ on clustering performance across multiple metrics. + +![](images/2384353210108ca106e8468a82d28025afb0bef59ce100ab6939c1b3ada7a15e.jpg) + +![](images/f04e281a6b311f55789035ed4b1b06f4318d38a046b733ce07a5b5763f93c931.jpg) + +![](images/24413e665a1115fd3fa6253299743600afcec905764741c1c85178c6392b0120.jpg) +Figure S2. Benchmark of cell representation learning. (a) Radar plot showing the performance across six established metrics, comparing single cell SOTA methods (scVI and scVI-batch), zero-shot models (GenePT-w, CellPLM, and scGPT), a fine-tuned model (scGPT), and comparable sciLaMA-based models (sciLaMA-GenePT/CellPLM/scGPT). (b-c) UMAP visualizations of cell embeddings derived from various models, with colors indicating cell types (top) and batch origins (bottom). (b) includes foundation models in zero-shot mode, while (c) presents sciLaMA-based models in additional to those from Figure 2c. + +![](images/b891357e46bfae6ffb86c34a59fb0d508bf3a28416ba8a503bef832df2d20c47.jpg) +Figure S3. Enhanced developmental cell trajectory analysis with sciLaMA. (a) UMAP visualizations of cell embeddings from sciLaMA (top) and scVI (bottom) colored by inferred pseudotime via Palantir. (b) Heatmaps of dynamic gene expression changes by pseudotime (x-axis) with genes ordered by temporal specificity (y-axis). Top shows sciLaMA-based pseudotime, bottom shows scVI results. + +![](images/67ba9ebb76f1d751a5f50b5b562a481a180964de7f7d00588eeb3648c4a10c4a.jpg) + +![](images/725c615026b594e233df1dc5ffc468224ca7efa300f8939e0f2767cdfa6437bb.jpg) +Figure S4. This visualization presents results from sciLaMA (with latent dimensionality $K = 2$ ) trained on an exemplar fetal liver dataset (Choi et al., 2023). The cell embeddings (middle-left; dots represent individual cells) are colored according to annotated cell types, while the contextual gene embeddings (middle-right; dots represent individual genes) show corresponding embedding dimensions. The top and bottom panels illustrate cell embeddings colored by expression levels of genes sampled from distinct regions of the contextual gene embedding space. We quantitatively assessed sciLaMA's interpretability by computing feature attribution scores using Integrated Gradients applied to the pretrained cell VAE encoder. This resulted in a gene-by-latent-node attribution matrix, which we summarized into a single gene vector by aggregating absolute attribution scores. To facilitate comparison, we projected sciLaMA's contextual gene embeddings into a comparable vector form using vector normalization. The strong correlations (Pearson $r = 0.43$ , Spearman $\rho = 0.46$ ) indicate that sciLaMA effectively captures key gene features in alignment with traditional stepwise attribution methods, yet does so within a more efficient, unified framework. 
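The interpretability comparison described in the Figure S4 caption can be sketched roughly as follows (an illustrative sketch only, not the exact analysis code; it assumes a PyTorch cell-VAE encoder `encoder` mapping a cells $\times$ genes tensor `X` to latent means, the Captum library for Integrated Gradients, and per-gene norms of sciLaMA's contextual gene embeddings `gene_emb` as the assumed comparison vector):

```python
import numpy as np
import torch
from captum.attr import IntegratedGradients
from scipy.stats import pearsonr, spearmanr

def gene_attribution_vector(encoder, X, n_latent):
    """Aggregate |Integrated Gradients| over cells and latent nodes into one score per gene."""
    ig = IntegratedGradients(encoder)           # encoder: (cells x genes) -> (cells x n_latent)
    scores = torch.zeros(X.shape[1])
    for k in range(n_latent):                   # attribute each latent node separately
        attr = ig.attribute(X, target=k)        # cells x genes attribution matrix
        scores += attr.abs().sum(dim=0)
    return scores.detach().numpy()

# Hypothetical usage: compare attributions with the contextual gene embeddings.
# attr_vec = gene_attribution_vector(encoder, X, n_latent=40)
# emb_norm = np.linalg.norm(gene_emb, axis=1)   # one scalar per gene (an assumed projection)
# print(pearsonr(attr_vec, emb_norm), spearmanr(attr_vec, emb_norm))
```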
\ No newline at end of file diff --git a/ICML/2025/sciLaMA_ A Single-Cell Representation Learning Framework to Leverage Prior Knowledge from Large Language Models/images.zip b/ICML/2025/sciLaMA_ A Single-Cell Representation Learning Framework to Leverage Prior Knowledge from Large Language Models/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..aa449563df771c658fd004d83cc8fe1306b30a3b --- /dev/null +++ b/ICML/2025/sciLaMA_ A Single-Cell Representation Learning Framework to Leverage Prior Knowledge from Large Language Models/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:eb6f704c5ff19bbf5774d951339b22c1616f4c056dcc18f53ebdaf71d9c47442 +size 1291220 diff --git a/ICML/2025/sciLaMA_ A Single-Cell Representation Learning Framework to Leverage Prior Knowledge from Large Language Models/layout.json b/ICML/2025/sciLaMA_ A Single-Cell Representation Learning Framework to Leverage Prior Knowledge from Large Language Models/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..763bfd593fc4eeaa2f9bf1e5261b3b049716e725 --- /dev/null +++ b/ICML/2025/sciLaMA_ A Single-Cell Representation Learning Framework to Leverage Prior Knowledge from Large Language Models/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:af0606bc67e25749ecfb4477ea19626d849fac66c735a2696114982324068220 +size 572883 diff --git a/ICML/2025/unMORE_ Unsupervised Multi-Object Segmentation via Center-Boundary Reasoning/e2e1d368-20c6-4de3-affb-ce16dbfc7f00_content_list.json b/ICML/2025/unMORE_ Unsupervised Multi-Object Segmentation via Center-Boundary Reasoning/e2e1d368-20c6-4de3-affb-ce16dbfc7f00_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..54bb9dd5e28aaab0dfca8db8f280a5cf98418d17 --- /dev/null +++ b/ICML/2025/unMORE_ Unsupervised Multi-Object Segmentation via Center-Boundary Reasoning/e2e1d368-20c6-4de3-affb-ce16dbfc7f00_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f5439e1e93cc18040878e01dd221f85b9e9e6a44c819dc9a33c02a9d70c499b9 +size 197949 diff --git a/ICML/2025/unMORE_ Unsupervised Multi-Object Segmentation via Center-Boundary Reasoning/e2e1d368-20c6-4de3-affb-ce16dbfc7f00_model.json b/ICML/2025/unMORE_ Unsupervised Multi-Object Segmentation via Center-Boundary Reasoning/e2e1d368-20c6-4de3-affb-ce16dbfc7f00_model.json new file mode 100644 index 0000000000000000000000000000000000000000..e4dd2511df9e581d346978d799f2e1edcb2ce339 --- /dev/null +++ b/ICML/2025/unMORE_ Unsupervised Multi-Object Segmentation via Center-Boundary Reasoning/e2e1d368-20c6-4de3-affb-ce16dbfc7f00_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:6589999b53e7541a33faa5bd77ab0d8aafd514db9e8df399c87aa9b7decb62c4 +size 244992 diff --git a/ICML/2025/unMORE_ Unsupervised Multi-Object Segmentation via Center-Boundary Reasoning/e2e1d368-20c6-4de3-affb-ce16dbfc7f00_origin.pdf b/ICML/2025/unMORE_ Unsupervised Multi-Object Segmentation via Center-Boundary Reasoning/e2e1d368-20c6-4de3-affb-ce16dbfc7f00_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..130202521b4c4f9be9079350e91a9ff9f0103624 --- /dev/null +++ b/ICML/2025/unMORE_ Unsupervised Multi-Object Segmentation via Center-Boundary Reasoning/e2e1d368-20c6-4de3-affb-ce16dbfc7f00_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f177b84bf94fb8d73eea4708843d7496a4c832045d86dc3a1954ca1460316e46 +size 19607228 diff --git 
a/ICML/2025/unMORE_ Unsupervised Multi-Object Segmentation via Center-Boundary Reasoning/full.md b/ICML/2025/unMORE_ Unsupervised Multi-Object Segmentation via Center-Boundary Reasoning/full.md new file mode 100644 index 0000000000000000000000000000000000000000..7412d1970f938be77b32c2275e712bace640d8a6 --- /dev/null +++ b/ICML/2025/unMORE_ Unsupervised Multi-Object Segmentation via Center-Boundary Reasoning/full.md @@ -0,0 +1,751 @@ +# unMORE: Unsupervised Multi-Object Segmentation via Center-Boundary Reasoning + +Yafei Yang $^{12}$ Zihui Zhang $^{12}$ Bo Yang $^{12}$ + +# Abstract + +We study the challenging problem of unsupervised multi-object segmentation on single images. Existing methods, which rely on image reconstruction objectives to learn objectness or leverage pretrained image features to group similar pixels, often succeed only in segmenting simple synthetic objects or discovering a limited number of real-world objects. In this paper, we introduce unMORE, a novel two-stage pipeline designed to identify many complex objects in real-world images. The key to our approach involves explicitly learning three levels of carefully defined object-centric representations in the first stage. Subsequently, our multi-object reasoning module utilizes these learned object priors to discover multiple objects in the second stage. Notably, this reasoning module is entirely network-free and does not require human labels. Extensive experiments demonstrate that unMORE significantly outperforms all existing unsupervised methods across 6 real-world benchmark datasets, including the challenging COCO dataset, achieving state-of-the-art object segmentation results. Remarkably, our method excels in crowded images where all baselines collapse. Our code and data are available at https://github.com/vLAR-group/unMORE + +# 1. Introduction + +By age two, humans can learn around 300 object categories and recognize multiple objects in unseen scenarios (Frank et al., 2016). For example, after reading a book about the Animal Kingdom where each page illustrates a single creature, children can effortlessly recognize multiple similar animals at a glance when visiting a zoo, without needing additional + +1Shenzhen Research Institute, The Hong Kong Polytechnic University; 2vLAR Group, The Hong Kong Polytechnic University. Correspondence to: Bo Yang . + +Proceedings of the $42^{nd}$ International Conference on Machine Learning, Vancouver, Canada. PMLR 267, 2025. Copyright 2025 by the author(s). + +teaching on site. Inspired by this efficient skill of perceiving objects and scenes, we aim to introduce a new framework to identify multiple objects from single images by learning object-centric representations, rather than relying on costly scene-level human annotations for supervision. + +Existing works for unsupervised multi-object segmentation mainly consist of two categories: 1) Slot-based methods represented by SlotAtt (Locatello et al., 2020) and its variants (Sajjadi et al., 2022; Didolkar et al., + +![](images/c8d0fd5d0ad3d14b27048d8d8e5e96dd181e239943aff8547f57a5da9f58cbe5.jpg) +Figure 1: Object images. + +2024). They usually rely on an image reconstruction objective to drive the slot-structured bottlenecks to learn object representations. While achieving successful results on synthetic datasets (Karazija et al., 2021; Greff et al., 2022), they often fail to scale to complex real-world images (Yang & Yang, 2022; 2024). 
2) Self-supervised feature distillation based methods such as TokenCut (Wang et al., 2022b), DINOSAUR (Seitzer et al., 2023), CutLER (Wang et al., 2023a), and CuVLER (Arica et al., 2024). Thanks to the strong object localization hints emerging from self-supervised pretrained features such as DINO/v2 (Caron et al., 2021; Oquab et al., 2023), these methods explore this property to discover multiple objects via feature reconstruction or pseudo mask creation for supervision. Despite obtaining very promising segmentation results on real-world datasets such as COCO (Lin et al., 2014), they still fail to discover a satisfactory number of objects. Primarily, this is because the simple feature reconstruction or pseudo mask creation for supervision tends to distill or define rather weak objectness followed by ineffective object search, resulting in only a few objects correctly discovered. In fact, unsupervised multi-object segmentation of a single image is hard and not straightforward, as it involves two critical issues: 1) the definition of what objects are (i.e., objectness) is unclear, 2) there is a lack of an effective way to discover those objects in unseen scenes. + +In this paper, to tackle these issues, we propose a two-stage pipeline consisting of an object-centric representation learn- + +![](images/fbfb3373df60f3d69767fa60eb9b3e12c95ecc07571f79c2a2c42dfd9f8838ee.jpg) +(a) Object Existence +(b) Object Center Field + +![](images/a637a2361d45bdcfb1eee82b1267de8b153d4141a87175507aa6c5404aa3e239.jpg) +(c) Object Boundary Distance Field +Figure 2: The upper blocks illustrate our framework. The lower blocks show three levels of object-centric representations. + +ing stage followed by an effective multi-object reasoning stage, akin to a human's innate skill of perceiving objects and scenes. As illustrated in the upper left block of Figure 2, in the first stage, we aim to train an objectness network to learn our explicitly defined object-centric representations from monolithic object images such as those in ImageNet. In the second stage, as illustrated in the right block of Figure 2, we introduce a multi-object reasoning module to automatically discover individual objects in single images just by querying our pretrained and frozen objectness network, instead of requiring human annotations for supervision. + +Regarding the objectness network, our key insight is that, given an input image or patch, it should be able to answer three essential questions: 1) is there an object inside (i.e., object existence)? 2) if so, where is it (i.e., object location/center)? and 3) what is the object shape (i.e., object boundary)? Essentially, training such an objectness network would be analogous to the learning process of infants forming concepts of objects in their minds. As illustrated in Figure 1, we can see that there is no salient object in image #1, but images #2/#3 contain similar dogs at different locations, whereas image #4 has another object with different shape boundaries. By training on such images, our objectness network aims to explicitly capture these top-down (existence/location) and bottom-up (boundary) object-centric representations. To achieve this goal, we introduce three corresponding levels of objectness to learn in parallel: 1) a binary object existence score, 2) an object center field, and 3) an object boundary distance field, as shown in Figure 2. + +With respect to the multi-object reasoning module, we aim to discover as many individual objects as possible in scene-level images. 
Our insight is that, given a multi-object image, if a cropped patch has a single valid object inside, its three levels of objectness representations must satisfy a certain threshold when querying against our pretrained objectness network. Otherwise, that patch should be discarded or its position and size should be effectively updated until a valid object is included inside. To this end, we introduce a center- + +boundary-aware reasoning algorithm to iteratively regress accurate multi-object bounding boxes and masks according to the learned three levels of object-centric representations from our pretrained objectness network. Notably, the multi-object reasoning is completely network-free and requires no human labels for supervision. + +Our framework, named unMORE, learns object-centric representations through the objectness network, enabling unsupervised multi-object reasoning on single images. Our contributions are: + +- We introduce a new pipeline comprising object-centric learning and multi-object reasoning, and propose three levels of explicit object-centric representations, including object existence, object center field, and object boundary distance field learned by an objectness network. +- We design a center-boundary aware reasoning algorithm to iteratively discover multiple objects in single images. The algorithm is network-free and human-label-free. +- We demonstrate superior object segmentation results and clearly surpass state-of-the-art unsupervised methods on 6 benchmark datasets including the challenging COCO. + +# 2. Related Work + +Object-centric Learning without Pretrained Features: Object-centric learning involves the unsupervised discovery of multiple objects in a scene. A plethora of methods have been proposed in the past years (Yuan et al., 2023). They primarily rely on an image reconstruction objective to learn objectness from scratch without needing any human labels or pretrained image features. Early models aim to learn object factors such as size, position, and appearance from raw images by training (variational) autoencoders (AE/VAE) (Kingma & Welling, 2014), including AIR (Eslami et al., 2016), SPACE (Lin et al., 2020), and others (Greff et al., 2016; 2017; Crawford & Pineau, 2019; Burgess et al., 2019; Greff et al., 2019). Recently, with the success of slot-based methods (Locatello et al., 2020; Engelcke et al., + +2020), most succeeding works (Engelcke et al., 2021; Sajjadi et al., 2022; Löwe et al., 2022; Biza et al., 2023; Löwe et al., 2023; Foo et al., 2023; Brady et al., 2023; Jia et al., 2023; Stanic et al., 2023; Lachapelle et al., 2023; Kirilenko et al., 2024; Gopalakrishnan et al., 2024; Wiedemer et al., 2024; Didolkar et al., 2024; Mansouri et al., 2024; Kori et al., 2024a;b; Jung et al., 2024; Fan et al., 2024) extend the slot structure from various aspects to improve the object segmentation performance. Although achieving excellent results, they often fail to scale to complex real-world images as investigated in (Yang & Yang, 2022; 2024). To overcome this limitation, a line of works (Weis et al., 2021) use additional information such as motion and depth to identify objects. Unfortunately, this precludes learning on most real-world images, which do not have motion or depth information. 
+ +Object-centric Learning with Pretrained Features: Very recently, with the advancement of self-supervised learning techniques, strong object semantic and localization hints emerge from these features, like DINO/v2 (Caron et al., 2021; Oquab et al., 2023) pretrained on ImageNet (Deng et al., 2009) without any annotation. An increasing number of methods leverage such features for unsupervised salient/single object detection (Voynov et al., 2021; Shin et al., 2022a; Tian et al., 2024), or multi-object segmentation (Simeoni et al., 2024), or video object segmentation (Aydemir et al., 2023; Zadaianchuk et al., 2024). Representative works include the early LOST (Simeoni et al., 2021), ODIN (Henaff et al., 2022), TokenCut (Wang et al., 2022b), and the recent DINOSAUR (Seitzer et al., 2023), CutLER (Wang et al., 2023a), and UnSAM (Wang et al., 2024). These methods and their variants (Wang et al., 2022a; Singh et al., 2022; Ishtiak et al., 2023; Wang et al., 2023c;b; Niu et al., 2024; Zhang et al., 2024) achieve very promising object segmentation results on challenging real-world datasets, demonstrating the value of pretrained features. However, they still fail to discover a satisfactory number of objects, and the estimated object bounding boxes and masks often suffer from under-segmentation issues. Essentially, this is because these methods tend to simply group pixels with similar features (obtained from pretrained models) as a single object, lacking the ability to discern boundaries between objects. As a consequence, for example, they usually group two chairs nearby into just one object. By contrast, our introduced three level object-centric representations are designed to jointly retain unique and explicit objectness features for each pixel, i.e., how far away to the object boundary and in what direction to the object center. + +Object-centric Representations: To represent objects for downstream tasks such as detection, segmentation, matching, and reconstruction, various properties can be used, including object center/centroid, object binary mask (Cai & Vasconcelos, 2018; Cheng et al., 2022), and object boundary (Park et al., 2019). For example, prior works + +(Gall & Lempitsky, 2009; Gall et al., 2011; Qi et al., 2019; Ahn et al., 2019) learn to transform pixels/points to object centroids for better segmentation, and the works (Thanh Nguyen, 2014; Ma et al., 2010) use object boundaries as the template for shape matching. However, these works are primarily designed for fully supervised tasks, whereas we focus on learning object-centric representations for unsupervised multi-object segmentation. In particular, our carefully designed three-level object-centric representations aim to jointly describe objects in a nuanced manner, and our unique multi-object reasoning module can make full use of the learned object-centric representations to tackle under- and over-segmentation issues. + +# 3. unMORE + +# 3.1. Preliminary + +Our objectness network aims to learn three levels of object-centric representations from the large-scale ImageNet dataset. Thanks to the advanced self-supervised learning techniques, which give us semantic and location information of objects in pretrained models, we opt to use pretrained features to extract object regions on ImageNet to bootstrap our objectness network. + +In particular, we exactly follow the VoteCut method proposed in CuVLER (Arica et al., 2024) to obtain a single object mask (binary) on each image of ImageNet. 
First, each image of ImageNet is fed into the self-supervised pretrained DINO/v2, obtaining patch features. Second, an affinity matrix is constructed based on the similarity of patch features, followed by Normalized Cut (Shi & Malik, 2000) to obtain multiple object masks. Third, the most salient mask of each image is selected as the rough foreground object. For more details, refer to CuVLER. These rough masks will be used to learn our object-centric representations in Section 3.2.

# 3.2. Objectness Network

With single object images and the prepared (rough) masks on ImageNet (the object image denoted as $I \in \mathcal{R}^{H \times W \times 3}$, the object mask as $M \in \mathcal{R}^{H \times W \times 1}$), the key to training our objectness network is the definition of three levels of object-centric representations, which are elaborated as follows.

Object Existence Score: For an image $I$, its object existence score $f^e$ is simply defined as 1 (positive sample) if it contains a valid object, i.e., $\mathrm{sum}(M) \geq 1$, and 0 otherwise (negative sample). In the preliminary stage of processing ImageNet, since every image contains a valid object, we then create a twin negative sample by cropping the largest rectangle on background pixels excluding the tightest object bounding box. As illustrated in Figure 2(a), image #1 is an original sample from ImageNet, whereas image #2 is a twin negative sample created by us.

Object Center Field: For an image $\mathbf{I}$ with a valid object mask $M$ inside, its object center field $f^c$ is designed to indicate the position/center of the object, i.e., the tightest object bounding box center. As illustrated in Figure 2(b), each pixel within the object mask is assigned a unit vector pointing to the object center $[C_h, C_w]$, and pixels outside the mask are assigned zero vectors. Formally, the center field value at the $(h, w)^{th}$ pixel, denoted as $f_{(h,w)}^c$, is defined as follows, where $f^c \in \mathcal{R}^{H \times W \times 2}$. Basically, this center field aims to capture the relative position of an object with respect to the pixels of an image.

$$
\boldsymbol{f}_{(h,w)}^{c} = \left\{ \begin{array}{ll} \frac{[h, w] - [C_h, C_w]}{\left\| [h, w] - [C_h, C_w] \right\|}, & \text{if } M_{(h,w)} = 1 \\ {[0, 0]}, & \text{otherwise} \end{array} \right. \tag{1}
$$

We notice that prior works use the Hough Transform to map pixels/points to object centroids for 2D/3D object detection (Gall et al., 2011; Qi et al., 2019), which requires learning both directions and distances to object centers. However, our object center field is defined only as unit directions pointing to object centers, as we only need to learn such directions to identify multi-center proposals instead of recovering object masks, as detailed in Step #2 of Sec 3.3.
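A minimal NumPy sketch of this center field, following Eq. (1) (illustrative only; `mask` is a hypothetical $H \times W$ binary object mask):

```python
import numpy as np

def object_center_field(mask):
    """Per-pixel unit offsets w.r.t. the tightest-box center (Eq. 1); zero vectors off the mask."""
    H, W = mask.shape
    ys, xs = np.nonzero(mask)
    center = np.array([(ys.min() + ys.max()) / 2.0, (xs.min() + xs.max()) / 2.0])
    grid = np.stack(np.meshgrid(np.arange(H), np.arange(W), indexing="ij"), axis=-1)
    offset = grid - center                                   # [h, w] - [C_h, C_w]
    norm = np.linalg.norm(offset, axis=-1, keepdims=True)
    field = np.where(norm > 0, offset / np.maximum(norm, 1e-8), 0.0)
    return field * mask[..., None]                           # zeros outside the object mask
```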
Object Boundary Distance Field: For the same image $\pmb{I}$ and its object mask $M$, this boundary distance field $\pmb{f}^b$ is designed to indicate the shortest distance from each pixel to the object boundary. To discriminate whether a pixel is inside or outside of an object, we first compute the simple signed distance field, where the distance values inside the object mask are assigned to be positive, those outside are negative, and boundary pixels are zeros. This signed distance field is denoted as $S \in \mathcal{R}^{H \times W \times 1}$ for the whole image, and its value at the $(h, w)^{th}$ pixel $S_{(h, w)}$ is calculated as follows:

$$
S_{(h,w)} = \left\{ \begin{array}{ll} \left\| [h, w] - [\bar{h}, \bar{w}] \right\|, & \text{if } M_{(h,w)} = 1 \\ -\left\| [h, w] - [\bar{h}, \bar{w}] \right\|, & \text{otherwise} \end{array} \right. \tag{2}
$$

where the location $(\bar{h},\bar{w})$ is the nearest pixel position on the object boundary corresponding to the pixel $(h,w)$. Detailed steps of calculation are in Appendix A.1. These signed distance values are measured in pixels and could vary significantly across images with differently-sized objects. Notably, the maximum signed distance value within an object mask $M$, assuming it appears at the $(\hat{h},\hat{w})^{th}$ pixel location, i.e., $S_{(\hat{h},\hat{w})} = \max(S*M)$, indicates the object size. The higher $S_{(\hat{h},\hat{w})}$, the more likely the object is large or its innermost pixel is far away from the boundary.

To stabilize the training process, we opt to normalize signed distance values as our object boundary distances. Notably, signed distances for foreground and background are normalized separately. For the $(h,w)^{th}$ pixel, our object boundary distance field, denoted as $\pmb{f}_{(h,w)}^{b}$, is defined as follows:

$$
\boldsymbol{f}_{(h,w)}^{b} = \left\{ \begin{array}{ll} \frac{S_{(h,w)}}{\max(\boldsymbol{S} * \boldsymbol{M})}, & \text{if } \boldsymbol{M}_{(h,w)} = 1 \\ \frac{S_{(h,w)}}{\left| \min(\boldsymbol{S} * (\boldsymbol{1} - \boldsymbol{M})) \right|}, & \text{otherwise} \end{array} \right. \tag{3}
$$

where $*$ represents element-wise multiplication and $f^b\in \mathcal{R}^{H\times W\times 1}$. Figure 2(c) shows an example of an object image and its final boundary distance field. Our definition of the boundary distance field has the nice property that the maximum signed distance value $S_{(\hat{h},\hat{w})}$ can be easily recovered from the norm of the gradient of $f^b$ at any pixel inside the object, as follows. This property is crucial for quickly searching for object boundaries at the stage of multi-object reasoning, as discussed in Section 3.3.

$$
S_{(\hat{h},\hat{w})} = 1 \Big/ \left\| \left[ \frac{\partial f_{(h,w)}^{b}}{\partial h}, \frac{\partial f_{(h,w)}^{b}}{\partial w} \right] \right\|, \quad \text{if } f_{(h,w)}^{b} > 0 \tag{4}
$$

Notably, the concept of the boundary distance field (Park et al., 2019; Xie et al., 2022) has been successfully used for shape reconstruction. Here, we demonstrate its effectiveness for object discovery.

Overall, for all original images of ImageNet, three levels of object-centric representations are clearly defined based on the generated rough object masks in Section 3.1. We also create twin negative images with zero existence scores.

Objectness Network Architecture and Training: Having the defined representations on images, we simply choose two commonly-used existing networks in parallel as our objectness network, particularly using ResNet50 (He et al., 2016) as a binary classifier to predict object existence scores $\tilde{f}^e$, and DPT-large (Ranftl et al., 2021) followed by two CNN-based heads to predict the object center field $\tilde{\pmb{f}}^c$ and object boundary distance field $\tilde{\pmb{f}}^b$, respectively.
To train the whole model, the cross-entropy loss is applied for learning existence scores, L2 loss for the center field, and L1 loss for the boundary distance field. Our total loss is defined as follows and more details are provided in Appendix A.2. + +$$ +\ell = C E \left(\tilde {f} ^ {e}, f ^ {e}\right) + \ell_ {2} \left(\tilde {\boldsymbol {f}} ^ {c}, \boldsymbol {f} ^ {c}\right) + \ell_ {1} \left(\tilde {\boldsymbol {f}} ^ {b}, \boldsymbol {f} ^ {b}\right) \tag {5} +$$ + +# 3.3. Multi-Object Reasoning Module + +With the objectness network well-trained on ImageNet, our ultimate goal is to identify as many objects as possible on complex scene images without needing human labels for supervision. Given a single scene image, a naive solution is to endlessly crop many patches with different resolutions at different locations, and then feed them into our pretrained objectness network to verify each patch's objectness. However, this is inefficient and infeasible in practice. To this end, we introduce a network-free multi-object reasoning module consisting of the following steps. + +![](images/1c499d54426c1adaa4efa5bff7e3986a427ad1070df69653c9fe86be1ecea6ed.jpg) +Figure 3: An illustration of kernel-based operation for multi-center detection and proposal splitting. + +Step #0 - Initial Object Proposal Generation: Given a scene image $\mathcal{I} \in \mathcal{R}^{M \times N \times 3}$ , we randomly and uniformly initialize a total of $T$ bounding box proposals by selecting a set of anchor pixels on the entire image. At each anchor pixel, multiple sizes and aspect ratios are chosen to create initial bounding boxes. More details are provided in Appendix A.3. For each proposal $P$ , its top-left and bottom-right corner positions in the original scene image will always be tracked and denoted as $[P^{u_1}, P^{v_1}, P^{u_2}, P^{v_2}]$ . We also linearly scale up or down all proposals to have the same resolution of $128 \times 128$ to feed into our objectness network subsequently. + +Step #1 - Existence Checking: For each bounding box proposal $P$ , we feed the corresponding image patch (cropped from $\mathcal{I}$ ) into our pretrained and frozen objectness network, querying its existence score $f_{p}^{e}$ . The proposal will be discarded if $f_{p}^{e}$ is smaller than a threshold $\tau^{e}$ . The higher $\tau^{e}$ is, the more aggressive it is to ignore potential objects. + +Step #2 - Center Reasoning: For the proposal $P$ with a high enough object existence score, we then query its center field $f_{p}^{c}$ from our objectness network. This step #2 aims to evaluate whether $f_{p}^{c}$ has only one center or $\geq 2$ centers. If there is just one center, the non-zero center field vectors of $f_{p}^{c}$ are likely pointing to a common position. Otherwise, those vectors are likely pointing to multi-positions. In the latter case, the proposal $P$ needs to be safely split into subproposals at pixels whose center field vectors are facing opposite directions. Thanks to this nice property, we propose the following simple kernel-based operation for multi-center detection and proposal splitting. + +As shown in the left block of Figure 3, given the center field $\pmb{f}_p^c \in \mathcal{R}^{128 \times 128 \times 2}$ of a proposal $P$ , we predefine a kernel $\mathbb{K} \in \mathbf{R}^{5 \times 5 \times 2}$ where each of the $(5 \times 5)$ vectors has a unit length and points outward against the kernel center. Details of kernel values are in Appendix A.3. 
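One possible construction of such a kernel, together with the correlation against the center field described in the next paragraph, is sketched below (illustrative only; the exact kernel values used by the paper are given in its Appendix A.3):

```python
import numpy as np
from scipy.ndimage import correlate

def anti_center_map(center_field, k=5):
    """Correlate the center field (H x W x 2) with a kernel of outward-pointing unit vectors."""
    r = k // 2
    yy, xx = np.meshgrid(np.arange(-r, r + 1), np.arange(-r, r + 1), indexing="ij")
    kernel = np.stack([yy, xx], axis=-1).astype(float)                 # offsets from the kernel center
    norm = np.linalg.norm(kernel, axis=-1, keepdims=True)
    kernel = np.where(norm > 0, kernel / np.maximum(norm, 1e-8), 0.0)  # unit length, pointing outward
    # Sum of dot products between kernel vectors and center-field vectors around each pixel.
    return sum(correlate(center_field[..., c], kernel[..., c], mode="constant", cval=0.0)
               for c in range(2))
```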
By applying this kernel on top of $\pmb{f}_p^c$ with a stride of $1 \times 1$ and zero-paddings, we obtain an anti-center map, denoted as $\pmb{f}_p^{ac} \in \mathcal{R}^{128 \times 128 \times 1}$ . The higher the anti-center value at a specific pixel, the more likely that pixel is in between multiple crowded objects. Otherwise, that pixel is more likely to be near an object center or belongs to the background. Clearly, the former case is more likely to incur under-segmentation. + +For this anti-center map $\pmb{f}_p^{ac}$ of the proposal $P$ , 1) if its highest value among all pixels is greater than a threshold + +$\tau^c$ , this proposal $P$ is likely to have $\geq 2$ crowded objects and will be split at the corresponding pixel location with the highest value. As shown in the right block of Figure 3, we safely split the proposal $P$ into 4 subproposals at the highest anti-center value (yellow star): $\{left, right, upper, lower\}$ halves. Each subproposal is regarded as a brand-new one and will be evaluated from Step #1 again. With this design, the particularly challenging under-segmentation issue often incurred by multiple crowded objects can be resolved. + +2) If the highest value of $f_{p}^{ac}$ is smaller than the threshold $\tau^c$ , the proposal $P$ is likely to have just one object, or multiple objects but they are far away from each other, i.e., more than 5 pixels apart. In this regard, we simply adopt the connected-component method used in CuVLER (Arica et al., 2024) to split the proposal $P$ into subproposals. Specifically, for its center field $f_{p}^{c}$ , all pixels that are spatially connected and have non-zero unit vectors are grouped into one subproposal. Each subproposal is regarded as a brand-new one and will be evaluated from Step #1 again. + +Step #3 - Boundary Reasoning: At this step, the proposal $P$ is likely to have a single object, and we query its boundary distance field $f_{p}^{b}$ from our objectness network. The ultimate goal of this step is to correctly update this proposal's location and size, i.e., the two corner positions $[P^{u_1}, P^{v_1}, P^{u_2}, P^{v_2}]$ in its original scene image $\mathcal{I}$ , such that the proposal can converge to a tight bounding box of the object inside. Recall that, in Equations 3&4, our definition of the boundary distance field and its gradient have a crucial property. Particularly, the value at a specific pixel of the boundary distance field $f_{p}^{b}$ indicates how far it is away from the nearest object's boundaries. This means that we can directly use $f_{p}^{b}$ to help update the two corner positions. + +Intuitively, if the proposal $P$ has an incomplete object, its borders need to expand. If it has many background pixels, its borders need to contract. With this insight, we only need to focus on boundary distance values of the four borders of $f_{p}^{b}$ to decide the margins to expand or contract. To this end, we introduce the following border-based reasoning algorithm to update $[P^{u_1}, P^{v_1}, P^{u_2}, P^{v_2}]$ . + +As illustrated in Figure 4, for the boundary distance field $\pmb{f}_p^b \in \mathcal{R}^{128 \times 128 \times 1}$ of a proposal $P$ , we first collect values at four borders {topmost row, leftmost column, bottommost row, rightmost column} highlighted by red dotted lines, denoted by four vectors: $\{\pmb{f}_{p_t}^b, \pmb{f}_{p_l}^b, \pmb{f}_{p_b}^b, \pmb{f}_{p_r}^b\} \in \mathcal{R}^{128}$ . 
![](images/500354a3cd107259901094cefeb76553475cecbedd2e0d89d1c5f74aa4fe32c2.jpg)
Figure 4: An illustration of border-based reasoning algorithm to update proposals.

Each of the four borders of proposal $P$ is updated as follows:

$$
P^{u_1} \leftarrow P^{u_1} - \frac{\max\left(\boldsymbol{f}_{p_t}^{b}\right)}{\left\| \left[ \frac{\partial \boldsymbol{f}_{p_t}^{b}}{\partial u}, \frac{\partial \boldsymbol{f}_{p_t}^{b}}{\partial v} \right] \right\|}, \quad (u, v) = \operatorname{argmax} \boldsymbol{f}_{p_t}^{b} \tag{6}
$$

$$
P^{v_1} \leftarrow P^{v_1} - \frac{\max\left(\boldsymbol{f}_{p_l}^{b}\right)}{\left\| \left[ \frac{\partial \boldsymbol{f}_{p_l}^{b}}{\partial u}, \frac{\partial \boldsymbol{f}_{p_l}^{b}}{\partial v} \right] \right\|}, \quad (u, v) = \operatorname{argmax} \boldsymbol{f}_{p_l}^{b}
$$

$$
P^{u_2} \leftarrow P^{u_2} + \frac{\max\left(\boldsymbol{f}_{p_b}^{b}\right)}{\left\| \left[ \frac{\partial \boldsymbol{f}_{p_b}^{b}}{\partial u}, \frac{\partial \boldsymbol{f}_{p_b}^{b}}{\partial v} \right] \right\|}, \quad (u, v) = \operatorname{argmax} \boldsymbol{f}_{p_b}^{b}
$$

$$
P^{v_2} \leftarrow P^{v_2} + \frac{\max\left(\boldsymbol{f}_{p_r}^{b}\right)}{\left\| \left[ \frac{\partial \boldsymbol{f}_{p_r}^{b}}{\partial u}, \frac{\partial \boldsymbol{f}_{p_r}^{b}}{\partial v} \right] \right\|}, \quad (u, v) = \operatorname{argmax} \boldsymbol{f}_{p_r}^{b}
$$

Because $\{\max(\pmb{f}_{p_t}^{b}), \max(\pmb{f}_{p_l}^{b}), \max(\pmb{f}_{p_b}^{b}), \max(\pmb{f}_{p_r}^{b})\}$ could be positive or negative, this results in the four borders of the proposal $P$ expanding or contracting by themselves. As shown in the rightmost block of Figure 4, the proposal $P$ is updated from the blue rectangle to the yellow one whose bottom and right borders expand to include more object parts because their maximum boundary distance values are positive, whereas its top and left borders contract to exclude more background pixels because their maximum boundary distance values are negative. As boundary distance values are physically meaningful, each expansion step will not go far outside of the tightest bounding box and each contraction step will not go deep into the tightest bounding box.

Among the total four steps, the center-boundary-aware reasoning Steps #2/#3 are crucial and complementary to tackle the core under-/over-segmentation issues. Once the two corners of a proposal $P$ are updated, we will feed the updated proposal into Step #3 until the corners converge to stable values. During this iterative updating stage, we empirically find that it is more efficient to take a slightly larger step size for expansion, and a smaller step size for contraction. More details are in Appendix A.3. The efficiency of our direct iterative updating is also investigated in Appendix A.18.

Once the size and location of a proposal $P$ converge, a valid object is discovered. After all proposals are processed in parallel through Steps #1/#2/#3, we collect all bounding boxes and apply the standard NMS to filter out duplicate detections. For each final bounding box, we obtain its object mask by taking the union of positive values within its boundary distance field and non-zero vectors within its center field. We also compute a confidence score for each object based on its object existence score, center field, and boundary distance field. More details are in Appendix A.4.
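To make the border update above concrete, the following is a minimal sketch written by us under stated assumptions, not the official implementation: `boundary_field` is assumed to be the $128 \times 128$ boundary distance field queried for the proposal patch, and `scale_u`/`scale_v` (our names) convert a step measured in patch pixels back into the proposal's coordinates in the original scene image; the expansion/contraction adjustments of Appendix A.3 are omitted.

```python
import numpy as np

def update_proposal(corners, boundary_field, scale_u, scale_v):
    """One border-reasoning step (cf. Eq. 6), illustrative only.

    corners        : [u1, v1, u2, v2] of the proposal in the scene image
    boundary_field : (128, 128) boundary distance field predicted for the proposal
                     patch (positive inside the object, negative in the background)
    scale_u/v      : image pixels per patch pixel along each axis
    """
    # Gradient magnitude of the field, used to convert field values into pixel offsets.
    gu, gv = np.gradient(boundary_field)
    grad_norm = np.sqrt(gu ** 2 + gv ** 2) + 1e-6

    u1, v1, u2, v2 = corners
    borders = {
        "top":    (boundary_field[0, :],  grad_norm[0, :]),
        "left":   (boundary_field[:, 0],  grad_norm[:, 0]),
        "bottom": (boundary_field[-1, :], grad_norm[-1, :]),
        "right":  (boundary_field[:, -1], grad_norm[:, -1]),
    }
    steps = {}
    for name, (vals, norms) in borders.items():
        i = int(np.argmax(vals))          # pixel with the largest boundary distance
        steps[name] = vals[i] / norms[i]  # signed step: > 0 expand, < 0 contract

    # Positive steps push borders outwards, negative steps pull them inwards.
    u1 -= steps["top"] * scale_u
    v1 -= steps["left"] * scale_v
    u2 += steps["bottom"] * scale_u
    v2 += steps["right"] * scale_v
    return [u1, v1, u2, v2]
```

In practice the proposal would be re-cropped and re-queried after every such update and iterated until the corners stabilize, as described above.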
Overall, with the pretrained objectness network in Section 3.2, and the network-free multi-object reasoning module in Section 3.3, our pipeline can discover multiple objects in single scene images without training an additional detector. This pipeline is named $\mathbf{unMORE}_{disc}$ in experiments.

Optionally Training a Detector: As shown in CutLER (Wang et al., 2023a) and CuVLER (Arica et al., 2024), the discovered objects from scene images can be used as pseudo labels to train a separate class agnostic detector (CAD) from scratch. We select and weight each discovered object based on its confidence score. Intuitively, the selected objects should have high object existence scores, homogeneous center fields and boundary fields. More details about the pseudo label selection are provided in Appendix A.5.

Following CuVLER, we also train a class agnostic detector from scratch on our own pseudo labels, using the same network architecture and training strategy. Our trained detector is named unMORE in experiments.

# 4. Experiments

Datasets: Evaluation of existing unsupervised multi-object segmentation methods is primarily conducted on the challenging COCO validation set (Lin et al., 2014). However, we empirically find that a large number of objects are actually not annotated in the validation set. This may not be an issue for evaluating fully-supervised methods in the literature, but it likely gives an inaccurate evaluation of unsupervised object discovery. To this end, we further manually augment the object annotations of the COCO validation set by labelling 197 additional object categories. The result is denoted as the COCO* validation set and will be released to the community. Details of the additional annotations are in Appendix A.16. We also evaluate on COCO20K (Lin et al., 2014), LVIS (Gupta et al., 2019), VOC (Everingham et al., 2010), KITTI (Geiger et al., 2012), Object365 (Shao et al., 2019), and OpenImages (Kuznetsova et al., 2020).

Baselines: For an extensive comparison on the COCO* validation set, we include the following three groups of methods.

Group 1 - Direct Object Discovery w/o Learnable Modules. The following methods directly discover objects from the COCO* val set, without involving any trainable modules.

- FreeMask: proposed in FreeSOLO (Wang et al., 2022a) to discover multi-objects based on DenseCL features.

Table 1: Quantitative results on COCO* val set. "# of pred obj." refers to the average number of predicted objects per image.
| Group | Method | Trainable Module | $AP_{50}^{box}$ | $AP_{75}^{box}$ | $AP^{box}$ | $AR_{100}^{box}$ | $AR^{box}$ | $AP_{50}^{mask}$ | $AP_{75}^{mask}$ | $AP^{mask}$ | $AR_{100}^{mask}$ | $AR^{mask}$ | # of pred obj. |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Direct Object Discovery, w/o Learnable Modules | FreeMask | - | 3.7 | 0.6 | 1.3 | 4.6 | 4.6 | 3.1 | 0.3 | 0.9 | 3.5 | 3.5 | 3.7 |
| | MaskCut (K=3) | - | 6.0 | 2.4 | 2.9 | 6.7 | 6.7 | 5.1 | 1.8 | 2.3 | 5.8 | 5.8 | 1.8 |
| | MaskCut (K=10) | - | 6.2 | 2.6 | 2.9 | 7.2 | 7.2 | 5.3 | 2.0 | 2.3 | 6.2 | 6.2 | 2.1 |
| | VoteCut | - | 10.8 | 4.9 | 5.5 | 11.3 | 11.3 | 9.5 | 4.0 | 4.6 | 9.8 | 9.8 | 8.9 |
| Direct Object Discovery, w/ Learnable Modules | DINOSAUR | Recon. SlotAtt | 2.0 | 0.2 | 0.6 | 4.8 | 4.8 | 1.1 | 0.1 | 0.3 | 2.9 | 2.9 | 7.0 |
| | FOUND | Seg. Head | 4.4 | 1.8 | 2.1 | 3.6 | 3.6 | 3.3 | 1.3 | 1.5 | 3.0 | 3.0 | 1.0 |
| | unMORE$_{disc}$ (Ours) | Obj. Net | 19.1 | 9.0 | 10.1 | 19.6 | 19.6 | 17.8 | 8.7 | 9.5 | 18.9 | 18.9 | 8.2 |
| Training Detectors | UnSAM | Detector x 4 | 10.2 | 6.3 | 6.4 | 36.1 | 50.1 | 10.2 | 6.2 | 6.3 | 34.1 | 46.1 | 332.2 |
| | CutLER | Detector x 3 | 26.0 | 14.2 | 14.7 | 37.9 | 37.9 | 22.7 | 11.2 | 11.8 | 32.7 | 32.7 | 100.0 |
| | CuVLER | Detector x 2 | 28.0 | 14.8 | 15.5 | 37.8 | 37.8 | 24.4 | 11.7 | 12.6 | 32.1 | 32.1 | 99.7 |
| | unMORE (Ours) | Obj. Network + Detector x 1 | 32.6 | 17.2 | 18.0 | 40.9 | 40.9 | 29.6 | 14.4 | 15.5 | 36.5 | 36.5 | 100.0 |
+ +![](images/fb40e2ff05b154b63072dfff74a1c14443f8dcb61127292db4f1055ee27a25fb.jpg) +Image + +![](images/be2f1a5d63596fedf89514999f4706b16ff1ce31b0da0cd54b7b972af3690c71.jpg) +$\mathrm{COCO^{*}GT}$ + +![](images/f584208b7cf76389979eb4fcbb2266ec70d30a1a86128a07954e936f562c8325.jpg) +DINO s16 + +![](images/51b0cdc760c4be9678af58bbd298c19b176f7e774cfd6615f9c15cf0691464b2.jpg) +DINO b16 + +![](images/a876a65cb62965f8e8b2d6d6c348bbdc8c87832063533ddae1e50d1c4c38abcd.jpg) +DINO s8 + +![](images/30c42f62a11f69385826ac44e61150abe90ab93701224cb848b362d06f96f8d4.jpg) +DINO b8 + +![](images/6b4694643c894fea58eec0005ac23e17e5dce051c4f1c5099839ed3293b40e20.jpg) +DINOy2 s14 + +![](images/5c092927f26c5403a49a2100c929f096c023a11d8865d5105c89e6834f11cd2d.jpg) +DINOv2 b14 + +![](images/5c210614502a8934be9930b51455e28c95a95859fa1d116d51c17e60e8c6d243.jpg) +MaskCut $(\mathrm{K} = 10)$ + +![](images/388e120809ea975e2ddc4560814f9f62441ed57f5081b41b056f62415c465953.jpg) +VoteCut + +![](images/e6a56c99ed565871e30838fd5ab40a08121d588c7fd73dc0f5d17e62bd969d52.jpg) +Center Field + +![](images/05ea43d49a9cfc3767ab96bf5aa9492e4152f0685cb57614b77e6165fd0aa474.jpg) +UNMORED (Ours) + +![](images/48869410505351a573e871fe53aedf8e0d461e31515dbf6f4180a433b58bdcf5.jpg) +Boundary Distance Field +Figure 5: Results on COCO* validation set. For MaskCut and VoteCut, the eigenvectors of the second smallest eigenvalue for their used DINO/v2 features are visualized. For unMORE $_{disc}$ , center and boundary object representations are visualized. + +- MaskCut: proposed in CutLER (Wang et al., 2023a) to discover multi-objects based on DINO features. The number of cut $K$ is set as both 3 and 10 in its favor. +- VoteCut: proposed in CuVLER (Arica et al., 2024) to discover multi-objects based on DINO/v2 features. + +Group 2 - Direct Object Discovery w/ Learnable Modules. The following methods use learnable modules to aid object discovery, but without training any multi-object detector. + +- DINOSAUR (Seitzer et al., 2023): It discovers multiobjects by learning to reconstruct DINO features. +- FOUND (Simeoni et al., 2023): This is a salient object detection method. +- unMOREdisc (Ours): We discover multi-objects by network-free reasoning through our objectness network. + +Group 3 - Object Segmentation by Training Additional Multi-object Detectors. The following methods discover objects by training additional detectors. We adopt a diverse range of settings for each method and report the highest scores from their best setting. A full list of all settings and results are in Appendix A.9. Note that, all final evaluation is conducted on $\mathrm{COCO}^*$ val set which is completely held out. + +- CutLER: Its best setting is to train detectors on pseudo labels generated by MaskCut on ImageNet train set. As mentioned in the original paper, its training stage takes 3 rounds where each round uses the detector of the previous round to infer on ImageNet train set as new pseudo labels. +- UnSAM (Wang et al., 2024): Its best setting is to train detectors on pseudo objects discovered by MaskCut on ImageNet train set for 3 rounds in the same way as CutLER. The final detector is used to infer on SA-1B train set. Another Mask2Former is trained on these pseudo labels. + +- CuVLER: Its best setting is to first train a detector on pseudo labels generated by VoteCut on ImageNet train set, and then train a new detector on pseudo labels inferred from the trained detector on the COCO train set.. 
+- unMORE (Ours): We just train a single detector on two groups of pseudo labels: one group from our discovered objects on COCO train set, another from object pseudo labels generated by VoteCut on ImageNet train set. + +# 4.1. Multi-object Segmentation Results on COCO* + +Table 1 shows AP/AR scores of all methods at different thresholds for object bounding boxes and masks. + +Results and Analysis of Methods in Group 1: From rows 1-4 of Table 1, we can see that MaskCut and VoteCut which utilize DINO/v2 features can achieve preliminary performance. The middle block of Figure 5 shows qualitative results of MaskCut and VoteCut together with their used DINO/v2 features for grouping objects. Basically, these baselines mainly rely on grouping similar per-pixel features (obtained from pretrained DINO/v2) as objects, resulting in multiple similar objects being grouped as just one, as shown in Figure 5 where two cabinets are detected as one. + +Results and Analysis of Methods in Group 2: From rows 5-7 of Table 1, we can see that our unMORE $_{disc}$ surpasses DINOSAUR and FOUND which are even inferior to feature similarity based methods in Group 1, meaning that reconstruction may not be a good object-centric grouping strategy and saliency maps may be misaligned with objectness. + +Regarding our unMORE $_{disc}$ , the right block of Figure 5 visualizes the learned center field and boundary distance + +![](images/2c3352ba3660b9a4c680ec5f83b221e8ceab52de09e771c80ecac97ccd32c00f.jpg) +Figure 6: Qualitative results of Direct Object Discovery w/o CAD on COCO* val set as discussed in Sec 4.1 Groups 1&2. + +![](images/d424e794bb541ecc662f174d99328fd79b4564307512bec7d04477f932b25a0c.jpg) +Figure 7: Qualitative results from trained detectors on COCO* val set as discussed in Sec 4.1 Group 3. + +field, which allows us to easily discover individual objects, especially in crowded scenes. This is also verified by qualitative results presented in Figure 6. To further validate this insight, we separately calculate scores on images with more than 5/10/15 ground truth objects respectively in Table 5 of Appendix A.8. Our method maintains high scores on crowded images, whereas baselines collapse. Results on the original COCO val set (fewer annotations) are in Appendix A.10. More analysis is in Appendix A.17. + +Results and Analysis of Methods in Group 3: From rows 8-11 of Table 1 and Figure 7, we can see that: 1) Our method clearly surpasses all methods by a large margin and achieves state-of-the-art performance. 2) Both CutLER and CuVLER can achieve reasonable results because additional detectors are likely to discover more objects. 3) The latest UnSAM appears to be incapable of identifying objects precisely, although it has a rather high AR score when its detector is + +trained on the large-scale SA-1B dataset from SAM (Kirillov et al., 2023). Results on the original COCO validation set (fewer annotations) are provided in Appendix A.10. + +# 4.2. Zero-shot Detection Results + +For each method, we select its best-performing detector in Group 3 of Sec 4.1 and directly test it on another 6 datasets: COCO20K/ LVIS/ VOC/ KITTI/ Object365/ OpenImages. As shown in Table 2 and Figure 8, unMORE achieves the highest accuracy on all datasets across almost all metrics, showing our generalization in zero-shot detection. + +We also note that, though our method achieves good performance for zero-shot detection on natural images, its capability is likely restricted by the learned objectness in training data. 
For data with significant domain gaps (e.g., medical images), object priors from natural images may not apply.

Table 2: Quantitative results of zero-shot detection. Each method uses its best model in Group 3. Since the KITTI/VOC/Object365/OpenImages datasets do not have ground truth masks, only bounding box metrics are calculated.
**COCO20K**

| Method | $AP_{50}^{box}$ | $AP_{75}^{box}$ | $AP^{box}$ | $AR_{100}^{box}$ | $AR^{box}$ | $AP_{50}^{mask}$ | $AP_{75}^{mask}$ | $AP^{mask}$ | $AR_{100}^{mask}$ | $AR^{mask}$ |
|---|---|---|---|---|---|---|---|---|---|---|
| UnSAM | 6.3 | 3.2 | 3.4 | 29.7 | 42.5 | 6.3 | 3.1 | 3.3 | 27.5 | 38.0 |
| CutLER | 22.4 | 11.9 | 12.5 | 33.1 | 33.1 | 19.6 | 9.2 | 10.0 | 27.2 | 27.2 |
| CuVLER | 24.1 | 12.3 | 13.1 | 32.6 | 32.6 | 21.1 | 9.7 | 10.7 | 27.2 | 27.2 |
| unMORE (Ours) | 25.9 | 13.0 | 13.9 | 35.4 | 35.4 | 23.6 | 11.1 | 12.0 | 30.5 | 30.5 |

**LVIS**

| Method | $AP_{50}^{box}$ | $AP_{75}^{box}$ | $AP^{box}$ | $AR_{100}^{box}$ | $AR^{box}$ | $AP_{50}^{mask}$ | $AP_{75}^{mask}$ | $AP^{mask}$ | $AR_{100}^{mask}$ | $AR^{mask}$ |
|---|---|---|---|---|---|---|---|---|---|---|
| UnSAM | 4.4 | 2.5 | 2.7 | 23.1 | 35.7 | 4.5 | 2.8 | 2.8 | 22.9 | 34.2 |
| CutLER | 8.5 | 3.9 | 4.5 | 21.8 | 21.8 | 6.7 | 3.2 | 3.5 | 18.7 | 18.7 |
| CuVLER | 8.9 | 4.1 | 4.7 | 20.8 | 20.8 | 7.2 | 3.4 | 3.8 | 17.9 | 17.9 |
| unMORE (Ours) | 10.4 | 5.0 | 5.6 | 24.1 | 24.1 | 8.9 | 4.5 | 4.9 | 21.4 | 21.4 |

**KITTI**

| Method | $AP_{50}^{box}$ | $AP_{75}^{box}$ | $AP^{box}$ | $AR_{100}^{box}$ | $AR^{box}$ |
|---|---|---|---|---|---|
| UnSAM | 1.9 | 0.6 | 0.8 | 17.0 | 21.7 |
| CutLER | 20.8 | 7.4 | 9.5 | 28.9 | 28.9 |
| CuVLER | 18.8 | 5.9 | 8.0 | 27.9 | 27.9 |
| unMORE (Ours) | 26.7 | 12.6 | 13.7 | 34.8 | 34.8 |

**VOC**

| Method | $AP_{50}^{box}$ | $AP_{75}^{box}$ | $AP^{box}$ | $AR_{100}^{box}$ | $AR^{box}$ |
|---|---|---|---|---|---|
| UnSAM | 5.1 | 2.3 | 2.6 | 38.8 | 51.9 |
| CutLER | 36.8 | 19.3 | 20.2 | 44.0 | 44.0 |
| CuVLER | 39.4 | 20.1 | 21.5 | 43.7 | 43.7 |
| unMORE (Ours) | 40.4 | 21.5 | 22.7 | 47.4 | 47.4 |

**Object365**

| Method | $AP_{50}^{box}$ | $AP_{75}^{box}$ | $AP^{box}$ | $AR_{100}^{box}$ | $AR^{box}$ |
|---|---|---|---|---|---|
| UnSAM | 9.1 | 4.9 | 5.3 | 30.5 | 47.9 |
| CutLER | 21.7 | 10.3 | 11.5 | 34.2 | 34.2 |
| CuVLER | 21.9 | 9.4 | 10.9 | 32.5 | 32.5 |
| unMORE (Ours) | 24.7 | 11.0 | 12.4 | 35.9 | 35.9 |

**OpenImages**

| Method | $AP_{50}^{box}$ | $AP_{75}^{box}$ | $AP^{box}$ | $AR_{100}^{box}$ | $AR^{box}$ |
|---|---|---|---|---|---|
| UnSAM | 6.6 | 3.7 | 4.0 | 34.6 | 48.7 |
| CutLER | 17.2 | 9.5 | 9.7 | 29.6 | 29.6 |
| CuVLER | 18.6 | 11.3 | 11.4 | 29.8 | 29.8 |
| unMORE (Ours) | 19.0 | 10.9 | 11.2 | 29.5 | 29.5 |
+ +![](images/d810fc63823777babbc9793571b8d30669adcf830bd676bfe7eeaa726cd48d25.jpg) +Figure 8: Qualitative results for zero-shot detection as discussed in Sec 4.2. + +![](images/ecad5710d36444af84f5114a81357cff4708828709a8110d9f51c2d1267bcefa.jpg) + +![](images/9009611414afc7f4b90e12349e3c4907bd41d7a322acdae765e71bfa88fe864c.jpg) + +![](images/f3e91694d37aacaa9bad91740e7625740de3ce530a95131565e9c6928d90b214.jpg) + +# 5. Ablations + +We explore various combinations of these representations to train an objectness network, which then discovers objects as pseudo labels for the final detector. Details of ablation settings are in Appendix A.11. + +Table 3: Ablation results of different choices for object-centric representations on COCO* val set. + +
| Object Existence | Object Center Field | Object Boundary Distance Field | $AP_{50}^{box}$ | $AP_{75}^{box}$ | $AP^{box}$ | $AR_{100}^{box}$ | $AR^{box}$ | $AP_{50}^{mask}$ | $AP_{75}^{mask}$ | $AP^{mask}$ | $AR_{100}^{mask}$ | $AR^{mask}$ |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| - | - | - | 23.4 | 10.7 | 11.8 | 33.8 | 33.8 | 19.6 | 8.0 | 9.4 | 35.7 | 35.7 |
| ✓ | - | - | 27.2 | 13.0 | 14.2 | 35.6 | 35.6 | 23.0 | 9.8 | 11.3 | 30.9 | 30.9 |
| - | ✓ | - | 29.2 | 14.9 | 15.8 | 37.3 | 37.3 | 25.6 | 11.8 | 13.0 | 32.5 | 32.5 |
| ✓ | ✓ | - | 29.0 | 14.4 | 15.4 | 36.3 | 36.3 | 25.0 | 11.1 | 12.5 | 31.0 | 31.0 |
| - | - | ✓ | 30.7 | 16.1 | 16.9 | 40.7 | 40.7 | 28.1 | 13.9 | 14.8 | 37.0 | 37.0 |
| ✓ | - | ✓ | 31.4 | 16.2 | 17.1 | 40.1 | 40.1 | 28.4 | 13.6 | 14.7 | 35.9 | 35.9 |
| - | ✓ | ✓ | 30.1 | 16.3 | 17.0 | 40.6 | 40.6 | 28.3 | 13.9 | 14.9 | 36.8 | 36.8 |
| ✓ | ✓ | ✓ | 32.6 | 17.2 | 18.0 | 40.9 | 40.9 | 29.6 | 14.4 | 15.5 | 36.5 | 36.5 |

Rows follow the ablation settings listed in Appendix A.11 in order.
+ +With the above ablated versions, each method generates its pseudo labels on COCO train set. Then a detector is trained on these labels together with the same pseudo labels of ImageNet train set, exactly following the setting of our full method in Group 3 of Sec 4.1. + +Results & Analysis: From Table 3, we can see that: 1) The boundary distance field yields the largest performance improvement, as it retains critical information of representing complex object boundaries, thus effectively helping discover more objects in the multi-object reasoning module. 2) Without learning object existence scores and object center fields, the AP score drops, potentially due to false positives + +or under-segmentation in spite of a high AR score achieved. 3) The commonly used binary mask is far from sufficient to retain complex object-centric representations. + +More ablations on our multi-object reasoning module, the choices of hyperparameters $\tau_{conf}^{e} / \tau_{conf}^{c} / \tau_{conf}^{b}$ , and the data augmentation for objectness network are in Appendix A.12. + +# 6. Conclusion + +In this paper, we demonstrate that multiple objects can be accurately discovered from complex real-world images, without needing human annotations in training. This is achieved by our novel two-stage pipeline comprising an object-centric representation learning stage followed by a multi-object reasoning stage. We explicitly define three levels of object-centric representations to be learned from the large-scale ImageNet without human labels in the first stage. These representations serve as a key enabler for effectively discovering multi-objects on complex scene images in the second stage. Extensive experiments on multiple benchmarks demonstrate the state-of-the-art performance of our method in multi-object segmentation. It would be interesting to extend our framework to large-scale 2D image generation, where the large pretrained generative models may further improve the quality of object-centric representations. + +# Acknowledgments + +This work was supported in part by National Natural Science Foundation of China under Grant 62271431, in part by Research Grants Council of Hong Kong under Grants 25207822 & 15225522. + +# Impact Statement + +This paper presents work whose goal is to advance the field of Machine Learning. There are many potential societal consequences of our work, none which we feel must be specifically highlighted here. + +# References + +Ahn, J., Cho, S., and Kwak, S. Weakly supervised learning of instance segmentation with inter-pixel relations. In CVPR, pp. 2209-2218, 2019. +Arica, S., Rubin, O., Gershov, S., and Laufer, S. CuVLER: Enhanced Unsupervised Object Discoveries through Exhaustive Self-Supervised Transformers. CVPR, 2024. +Aydemir, G., Xie, W., and Guney, F. Self-supervised object-centric learning for videos. NeurIPS, 2023. +Biza, O., van Steenkiste, S., Sajjadi, M. S. M., Elsayed, G. F., Mahendran, A., and Kipf, T. Invariant Slot Attention: Object Discovery with Slot-Centric Reference Frames. ICML, 2023. +Brady, J., Zimmermann, R. S., Sharma, Y., Schölkopf, B., von Kugelgen, J., and Brendel, W. Provably Learning Object-Centric Representations. ICML, 2023. +Burgess, C. P., Matthew, L., Watters, N., Kabra, R., Higgins, I., Botvinick, M., and Lerchner, A. MONet: Unsupervised Scene Decomposition and Representation. arXiv:1901.11390, 2019. +Cai, Z. and Vasconcelos, N. Cascade r-cnn: Delving into high quality object detection. In CVPR, 2018. 
+Caron, M., Touvron, H., Misra, I., Jégou, H., Mairal, J., Bojanowski, P., and Joulin, A. Emerging Properties in Self-Supervised Vision Transformers. ICCV, 2021. +Cheng, B., Misra, I., Schwing, A. G., Kirillov, A., and Girdhar, R. Masked-attention mask transformer for universal image segmentation. In CVPR, pp. 1290–1299, 2022. +Crawford, E. and Pineau, J. Spatially Invariant Unsupervised Object Detection with Convolutional Neural Networks. AAAI, 2019. +Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., and Fei-Fei, L. ImageNet: A large-scale hierarchical image database. CVPR, 2009. + +Didolkar, A., Goyal, A., and Bengio, Y. Cycle Consistency Driven Object Discovery. ICLR, 2024. +Engelcke, M., Kosiorek, A. R., Jones, O. P., and Posner, I. GENESIS: Generative Scene Inference and Sampling with Object-Centric Latent Representations. *ICLR*, 2020. +Engelcke, M., Jones, O. P., and Posner, I. GENESIS-V2: Inferring Unordered Object Representations without Iterative Refinement. NeurIPS, 2021. +Eslami, S. M. A., Heess, N., Weber, T., Tassa, Y., Kavukcuoglu, K., and Hinton, G. E. Attend, Infer, Repeat: Fast Scene Understanding with Generative Models. NIPS, 2016. +Everingham, M., Van Gool, L., Williams, C. K., Winn, J., and Zisserman, A. The Pascal visual object classes (voc) challenge. IJCV, 2010. +Fan, K., Bai, Z., Xiao, T., He, T., Horn, M., Fu, Y., Locatello, F., and Zhang, Z. Adaptive Slot Attention: Object Discovery with Dynamic Slot Number. CVPR, 2024. +Foo, A., Hsu, W., and Lee, M. L. Multi-Object Representation Learning via Feature Connectivity and Object-Centric Regularization. NeurIPS, 2023. +Frank, M. C., Braginsky, M., Yurovsky, D., and Marchman, V. A. Wordbank: an open repository for developmental vocabulary data. Journal of Child Language, 2016. +Gall, J. and Lempitsky, V. Class-Specific Hough Forests for Object Detection. CVPR, 2009. +Gall, J., Yao, A., Razavi, N., Luc Van, L., and Lempitsky, V. Hough Forests for Object Detection, Tracking, and Action Recognition. TPAMI, 2011. +Geiger, A., Lenz, P., and Urtasun, R. Are we ready for autonomous driving? the kitti vision benchmark suite. In CVPR, 2012. +Gopalakrishnan, A., Stanic, A., Schmidhuber, J., and Mozer, M. C. Recurrent Complex-Weighted Autoencoders for Unsupervised Object Discovery. arXiv:2405.17283, 2024. +Greff, K., Rasmus, A., Berglund, M., Hao, T. H., Schmidhuber, J., and Valpola, H. Tagger: Deep unsupervised perceptual grouping. NIPS, 2016. +Greff, K., Van Steenkiste, S., and Schmidhuber, J. Neural Expectation Maximization. NIPS, 2017. +Greff, K., Kaufman, R. L., Kabra, R., Watters, N., Burgess, C., Zoran, D., Matthew, L., Botvinick, M., and Lerchner, A. Multi-object representation learning with iterative variational inference. ICML, 2019. + +Greff, K., Herrmann, C., Belletti, F., Fleet, D. J., Kipf, T., Pot, E., Sela, M., Meyer, H., Beyer, L., Kundu, A., Wu, T., Rebain, D., Stone, A., Laradji, I., Zhong, F., Duckworth, D., and Liu, H.-t. D. Kubric: A scalable dataset generator. CVPR, 2022. +Gupta, A., Dollar, P., and Girshick, R. Lvis: A dataset for large vocabulary instance segmentation. In CVPR, 2019. +He, K., Zhang, X., Ren, S., and Sun, J. Deep residual learning for image recognition. In CVPR, 2016. +Hénaff, O. J., Koppula, S., Shelhamer, E., Zoran, D., Jaegle, A., Zisserman, A., Carreira, J., and Arandjelović, R. Object discovery and representation networks. ECCV, 2022. +Ishtiak, T., En, Q., and Guo, Y. Exemplar-FreeSOLO: Enhancing Unsupervised Instance Segmentation with Exemplars. CVPR, 2023. 
+Jia, B., Liu, Y., and Huang, S. Improving Object-Centric Learning with Query Optimization. ICLR, 2023. +Jung, W., Yoo, J., Ahn, S., and Hong, S. Learning to Compose: Improving Object Centric Learning by Injecting Compositionality. *ICLR*, 2024. +Karazija, L., Laina, I., and Rupprecht, C. ClevrTex: A Texture-Rich Benchmark for Unsupervised Multi-Object Segmentation. NeurIPS, 2021. +Kingma, D. P. and Welling, M. Auto-Encoding Variational Bayes. ICLR, 2014. +Kirilenko, D., Vorobyov, V., Kovalev, A. K., and Panov, A. I. Object-Centric Learning with Slot Mixture Module. *ICLR*, 2024. +Kirillov, A., Mintun, E., Ravi, N., Mao, H., Rolland, C., Gustafson, L., Xiao, T., Whitehead, S., Berg, A. C., Lo, W.-Y., et al. Segment anything. In ICCV, 2023. +Kori, A., Locatello, F., Ribeiro, F. D. S., Toni, F., and Glocker, B. Grounded Object Centric Learning. ICLR, 2024a. +Kori, A., Locatello, F., Santhirasekaram, A., Toni, F., Glocker, B., and Ribeiro, F. D. S. Identifiable Object-Centric Representation Learning via Probabilistic Slot Attention. arXiv:2406.07141, 2024b. +Kuznetsova, A., Rom, H., Alldrin, N., Uijlings, J., Krasin, I., Pont-Tuset, J., Kamali, S., Popov, S., Malloci, M., Kolesnikov, A., et al. The open images dataset v4: Unified image classification, object detection, and visual relationship detection at scale. IJCV, 2020. + +Lachapelle, S., Mahajan, D., Mitliagkas, I., and Lacoste-Julien, S. Additive Decoders for Latent Variables Identification and Cartesian-Product Extrapolation. NeurIPS, 2023. +Lin, T.-Y., Maire, M., Belongie, S., Hays, J., Perona, P., Ramanan, D., Dollar, P., and Zitnick, C. L. Microsoft COCO: Common Objects in Context. ECCV, 2014. +Lin, Z., Wu, Y.-F., Peri, S. V., Sun, W., Singh, G., Deng, F., Jiang, J., and Ahn, S. SPACE: Unsupervised Object-Oriented Scene Representation via Spatial Attention and Decomposition. ICLR, 2020. +Locatello, F., Weissenborn, D., Unterthiner, T., Mahendran, A., Heigold, G., Uszkoreit, J., Dosovitskiy, A., and Kipf, T. Object-Centric Learning with Slot Attention. NeurIPS, 2020. +Lowe, S., Lippe, P., Rudolph, M., and Welling, M. Complex-Valued Autoencoders for Object Discovery. TMLR, 2022. +Lowe, S., Lippe, P., Locatello, F., and Welling, M. Rotating Features for Object Discovery. NeurIPS, 2023. +Ma, T., Yang, X., and Latecki, L. J. Boosting chamfer matching by learning chamfer distance normalization. In ECCV, pp. 450-463. Springer, 2010. +Mansouri, A., Hartford, J., Labs, V., Zhang, Y., and Bengio, Y. Object-centric architectures enable efficient causal representation learning. *ICLR*, 2024. +Niu, D., Wang, X., Han, X., Lian, L., Herzig, R., and Darrell, T. Unsupervised Universal Image Segmentation. CVPR, 2024. +Oquab, M., Darcet, T., Moutakanni, T., Vo, H., Szafraniec, M., Khalidov, V., Fernandez, P., Haziza, D., Massa, F., El-Nouby, A., et al. Dinov2: Learning robust visual features without supervision, 2023. +Park, J. J., Florence, P., Straub, J., Newcombe, R., and Lovegrove, S. DeepSDF: Learning Continuous Signed Distance Functions for Shape Representation. CVPR, 2019. +Qi, C. R., Litany, O., He, K., and Guibas, L. J. Deep Hough Voting for 3D Object Detection in Point Clouds. ICCV, 2019. +Ranftl, R., Bochkovskiy, A., and Koltun, V. Vision transformers for dense prediction. In CVPR, 2021. +Ren, S., He, K., Girshick, R., and Sun, J. Faster R-CNN: Towards Real-time Object Detection with Region Proposal Networks. NIPS, 2015. + +Sajjadi, M. S. M., Duckworth, D., Mahendran, A., van Steenkiste, S., Pavetic, F., Lucic, M., Guibas, L. 
J., Greff, K., and Kipf, T. Object Scene Representation Transformer. NeurIPS, 2022. +Seitzer, M., Horn, M., Zadaianchuk, A., Zietlow, D., Xiao, T., Simon-Gabriel, C.-J., He, T., Zhang, Z., Scholkopf, B., Brox, T., and Locatello, F. Bridging the Gap to Real-World Object-Centric Learning. ICLR, 2023. +Shao, S., Li, Z., Zhang, T., Peng, C., Yu, G., Zhang, X., Li, J., and Sun, J. Objects365: A large-scale, high-quality dataset for object detection. In ICCV, 2019. +Shi, J. and Malik, J. Normalized cuts and image segmentation. TPAMI, 2000. +Shin, G., Albanie, S., and Xie, W. Unsupervised Salient Object Detection with Spectral Cluster Voting. CVPRW, 2022a. +Shin, G., Albanie, S., and Xie, W. Unsupervised salient object detection with spectral cluster voting. In CVPRW, 2022b. +Simeoni, O., Puy, G., Vo, H. V., Roburin, S., Gidaris, S., Bursuc, A., Pérez, P., Marlet, R., and Ponce, J. Localizing Objects with Self-Supervised Transformers and no Labels. BMVC, 2021. +Simeoni, O., Sekkat, C., Puy, G., Vobecky, A., Zablocki, É., and Pérez, P. Unsupervised Object Localization: Observing the Background to Discover Objects. CVPR, 2023. +Simeoni, O., Zablocki, É., Gidaris, S., Puy, G., and Pérez, P. Unsupervised Object Localization in the Era of Self-Supervised ViTs: A Survey. IJCV, 2024. +Singh, G., Deng, F., and Ahn, S. Illiterate DALL-E Learns toCompose.ICLR,2022. +Stanic, A., Gopalakrishnan, A., Irie, K., and Schmidhuber, J. Contrastive Training of Complex-Valued Autoencoders for Object Discovery. NeurIPS, 2023. +Thanh Nguyen, D. A novel chamfer template matching method using variational mean field. In CVPR, pp. 2425-2432, 2014. +Tian, X., Xu, K., and Lau, R. Unsupervised Salient Instance Detection. CVPR, 2024. +Voynov, A., Morozov, S., and Babenko, A. Object Segmentation Without Labels with Large-Scale Generative Models. ICML, 2021. +Wang, X., Yu, Z., De Mello, S., Kautz, J., Anandkumar, A., Shen, C., and Alvarez, J. M. FreeSOLO: Learning to Segment Objects without Annotations. CVPR, 2022a. + +Wang, X., Girdhar, R., Yu, S. X., and Misra, I. Cut and Learn for Unsupervised Object Detection and Instance Segmentation. CVPR, 2023a. +Wang, X., Yang, J., and Darrell, T. Segment Anything without Supervision. NeurIPS, 2024. +Wang, Y., Shen, X., Hu, S., Yuan, Y., Crowley, J., and Vaufreydaz, D. Self-Supervised Transformers for Unsupervised Object Discovery using Normalized Cut. CVPR, 2022b. +Wang, Y., Shen, X., Yuan, Y., Du, Y., Li, M., Hu, S. X., Crowley, J. L., and Vaufreydaz, D. TokenCut: Segmenting Objects in Images and Videos With Self-Supervised Transformer and Normalized Cut. TPAMI, 2023b. +Wang, Z., Shou, M. Z., and Zhang, M. Object-centric Learning with Cyclic Walks between Parts and Whole. NeurIPS, 2023c. +Weis, M. A., Chitta, K., Sharma, Y., Brendel, W., Bethge, M., Geiger, A., and Ecker, A. S. Benchmarking Unsupervised Object Representations for Video Sequences. JMLR, 2021. +Wiedemer, T., Brady, J., Panfilov, A., Juhos, A., Bethge, M., and Brendel, W. Provable Compositional Generalization for Object-Centric Learning. *ICLR*, 2024. +Wu, Y., Kirillov, A., Massa, F., Lo, W.-Y., and Girshick, R. Detector2. https://github.com/facebookresearch/detectron2, 2019. +Xie, Y., Takikawa, T., Saito, S., Litany, O., Yan, S., Khan, N., Tombari, F., Tompkin, J., Sitzmann, V., and Sridhar, S. Neural Fields in Visual Computing and Beyond. Computer Graphics Forum, 2022. +Yang, Y. and Yang, B. Promising or Elusive? Unsupervised Object Segmentation from Real-world Single Images. NeurIPS, 2022. +Yang, Y. and Yang, B. 
Benchmarking and Analysis of Unsupervised Object Segmentation from Real-World Single Images. IJCV, 2024. +Yuan, J., Chen, T., Li, B., and Xue, X. Compositional Scene Representation Learning via Reconstruction: A Survey. TPAMI, 2023. +Zadaianchuk, A., Seitzer, M., and Martius, G. Object-centric learning for real-world videos by predicting temporal feature similarities. NeurIPS, 2024. +Zhang, X., Xie, J., Yuan, Y., Mi, M. B., and Tan, R. T. HEAP: Unsupervised Object Discovery and Localization with Contrastive Grouping. AAAI, 2024. + +# A. Appendix + +The appendix includes: + +- Details for Object-centric Representations. A.1 +- Details for Objectness Network. A.2 +- Details for Multi-object Reasoning Module. A.3 +- Details for Object Mask and Confidence Score. A.4 +- Details for Pseudo Label Process. A.5 +- Details for Detector Training. A.6 +Details for Datasets. A.7 +- More Results on COCO* Validation Set. A.8 +- Details for CAD Training Settings. A.9 +- Experiment Results on COCO Validation Set. A.10 +Details for Ablation Settings. A.11 +- More Ablation Studies. A.12 +- Time Consumption and Throughput. A.13 +- Failure Cases. A.14 +- More Qualitative Results. A.15 +- Details of COCO* Validation Set. A.16 +- More Results and Analysis of Object-centric Representations. A.17 +Number of Iterations for Proposal Optimization. A.18 + +# A.1. Details for Object-centric Representations + +Calculation of Signed Distance Field. Given a binary mask $M \in \mathcal{R}^{H \times W \times 1}$ , we calculate the distance from each pixel to its closest boundary point with distanceTransform() function in the opencv library (https://docs.opencv.org/4.x/d7/d1b/groupimgproc_misc.html). The function takes a binary mask as input and computes the shortest path length to the nearest zero pixel for all non-zero pixels. Thus, we first compute the distance field within the object, denoted as $S_{obj}$ , using the object binary mask $M$ . Then, we compute the distance field within the background, denoted as $S_{bg}$ , using $(1 - M)$ . The signed distance field for the whole image is $S = S_{obj} - S_{bg}$ . Specifically, when using distanceTransform(), we set the distance type as L2 (Euclidean distance) and the mask size to be 3. + +# A.2. Details for Objectness Network + +Objectness Network Architecture. The object existence model employs ResNet50 (He et al., 2016) as the backbone. Following this backbone, the classification head consists of a single linear layer with output dimension of 1 and a sigmoid activation layer. The prediction for the object center field and the object boundary distance shares the same DPT-large (Ranftl et al., 2021) backbone with a 256-dimensional output size. Dense feature maps extracted from this backbone have the same resolution as input images and + +the number of channels is 256. There are two prediction heads for the prediction of the object center field and the object boundary distance, respectively. + +Table 4: Architecture of prediction heads for object center field and object boundary distance. + +
| Layer | Type | Channels (center field head) | Channels (boundary field head) | Activation | Stride |
|---|---|---|---|---|---|
| layer 1 | conv 1x1 | 512 | 512 | ReLU | 1 |
| layer 2 | conv 3x3 | 512 | 512 | ReLU | 1 |
| layer 3 | conv 1x1 | 1024 | 1024 | ReLU | 1 |
| layer 4 | conv 1x1 | 2 | 1 | ReLU | 1 |
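For readers who prefer code, Table 4 can be restated directly. The sketch below is ours (not the released implementation); it assumes the 256-channel dense feature map from the shared DPT backbone as input, and the padding of the 3x3 layer is our assumption to keep the feature-map resolution.

```python
import torch.nn as nn

def make_head(out_channels: int, in_channels: int = 256) -> nn.Sequential:
    """Prediction head following Table 4: out_channels=2 for the center field,
    out_channels=1 for the boundary distance field (a sketch, not official code)."""
    return nn.Sequential(
        nn.Conv2d(in_channels, 512, kernel_size=1, stride=1),      # layer 1: conv 1x1, 512
        nn.ReLU(inplace=True),
        nn.Conv2d(512, 512, kernel_size=3, stride=1, padding=1),   # layer 2: conv 3x3, 512
        nn.ReLU(inplace=True),
        nn.Conv2d(512, 1024, kernel_size=1, stride=1),             # layer 3: conv 1x1, 1024
        nn.ReLU(inplace=True),
        nn.Conv2d(1024, out_channels, kernel_size=1, stride=1),    # layer 4: conv 1x1, 2 or 1
        nn.ReLU(inplace=True),                                     # Table 4 lists ReLU here too
    )

center_head = make_head(out_channels=2)    # predicts the 2-channel center field
boundary_head = make_head(out_channels=1)  # predicts the 1-channel boundary distance field
```

Both heads consume the same dense feature map, as described in A.2.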
+ +Objectness Network Training Strategy. The object existence model is trained using the Adam optimizer for 100K iterations with a batch size of 64. The learning rate is set to be a constant 0.0001. The object center and boundary models are jointly trained using the Adam optimizer for 50K iterations with a batch size of 16. The learning rate starts at 0.0001 and is divided by 10 at 10K and 20K iterations. + +Objectness Network Training Data. We use the ImageNet train set with about 1.28 million images as the training set for the objectness network. For each ImageNet image, its object mask is the most confident mask generated by VoteCut proposed in CuVLER (Arica et al., 2024). For the training of the object existence model, negative samples that do not contain objects are created by cropping the largest rectangle region on the background. For positive samples that contain objects, we apply the random crop augmentation onto the original ImageNet image and discard the crop without a foreground object. For the training of the object center and boundary model, we first calculate the ground truth center field and boundary distance field based on the original full ImageNet image. Then, we apply the random crop augmentation onto the original image as well as the two representations. Specifically, the scale of the random crop is between 0.08 to 1, which implies the lower and upper bounds for the random area of the crop. The aspect ratio range of the random crop is between 0.75 and 1.33. Lastly, each image is resized to $128 \times 128$ before feeding into Objectness Network. + +# A.3. Details for Multi-Object Reasoning Module + +Initial Object Proposal Generation. Motivated by anchor box generation in Faster R-CNN (Ren et al., 2015). We use five scales [32, 64, 128, 256, 512] and three aspect ratios [0.5, 1, 2]. At each scale, we randomly and uniformly sample proposal centers based on scale sizes. At each sampled center, we generate three boxes with different aspect ratios. + +![](images/fb043dfedadd226a8eb6dbcef5bbf569a70f9709626c0f54479ac1ae10ab4c7c.jpg) +Figure 9: Predefined Kernel for Center Reasoning + +Predefined Kernel for Center Reasoning. As illustrated + +in Figure 9, each position within the kernel is defined as a 2-dimensional unit vector pointing towards the center of the kernel. Specifically, the value at the kernel center with position $[2,2]$ is $(0,0)$ . The value at the $(i,j)^{th}$ position, denoted as $\mathbb{K}_{i,j}$ , is defined and normalized as: + +$$ +\mathbb {K} _ {i, j} = \frac {[ 2 , 2 ] - [ i , j ]}{\| [ 2 , 2 ] - [ i , j ] \|} +$$ + +To evaluate how Center Field matches with this anti-center pattern, we apply convolution onto Center Field with this kernel to calculate their average cosine similarity for each pixel in the Center Field. We set the threshold $\tau_{c}$ to be 0.25. + +More Details for Center Reasoning. While deriving the anti-center map with the predefined kernel, we also find the boundary of the Center Field. Since on the anti-center map, values at the boundary of the Center Field will also be positive, we thus ignore the values on the Center Field boundary. Examples of center reasoning are provided in Figure 10. + +More Details for Boundary Reasoning. Let $\pmb{f}_p^b \in \mathcal{R}^{128 \times 128 \times 1}$ be the distance field for proposal $P$ and $\nabla f_p^b \in \mathcal{R}^{128 \times 128 \times 2}$ is the gradient map for $f_p^b$ , where $\nabla f_p^b[u, v] = (\frac{\partial f_p^b}{\partial u}, \frac{\partial f_p^b}{\partial v})$ . 
And $\| \nabla \boldsymbol{f}_p^b \| \in \mathcal{R}^{128 \times 128 \times 1}$ is the norm of the gradient map. To make the bounding box update more stable, we use two strategies: (1) use the averaged distance field gradient instead of the gradient at a single pixel position; (2) adjust the calculated update step towards a more aggressive expansion and a more conservative contraction.

(1) Since the distance field within the object and outside the object are normalized separately, the gradient averaging also needs to be applied separately. Thus, we first apply the sigmoid function $\sigma$ onto the boundary field to generate masks for the foreground $\sigma (\pmb{f}_p^b)$ and the background $1 - \sigma (\pmb{f}_p^b)$. Then gradients are averaged separately on the two masks and combined as the averaged gradient norm map for the distance field, $AVG(\| \nabla \boldsymbol{f}_p^b\|)\in \mathcal{R}^{128\times 128\times 1}$. We replace $\| \nabla \boldsymbol{f}_p^b\|$ with $AVG(\| \nabla \boldsymbol{f}_p^b\|)$ when calculating box updates:

$$
AVG\left(\| \nabla \boldsymbol{f}_{p}^{b} \|\right) = \frac{\sum \sigma\left(\boldsymbol{f}_{p}^{b}\right) \cdot \| \nabla \boldsymbol{f}_{p}^{b} \|}{\sum \sigma\left(\boldsymbol{f}_{p}^{b}\right)} \cdot \sigma\left(\boldsymbol{f}_{p}^{b}\right) \tag{7}
$$

$$
\qquad + \frac{\sum \left(1 - \sigma\left(\boldsymbol{f}_{p}^{b}\right)\right) \cdot \| \nabla \boldsymbol{f}_{p}^{b} \|}{\sum \left(1 - \sigma\left(\boldsymbol{f}_{p}^{b}\right)\right)} \cdot \left(1 - \sigma\left(\boldsymbol{f}_{p}^{b}\right)\right) \tag{8}
$$

(2) Empirically, box contraction needs to be more conservative since objects could be overlooked if the proposal is over-tightened. For example, for a person wearing a tie, if the proposal around the person shrinks too much, the object of interest may shift to the tie instead. Also, for efficiency, it is suitable to make a more aggressive expansion since objects can still be well seen from a proposal larger than its tightest bounding box. Thus, we further adjust the calculated updates with an adjustment ratio $\tau_{adjust} = 0.5$. Instead of directly using Eq.
6, we use the following formulas to calculate boundary update: + +$$ +P ^ {u _ {1}} \leftarrow P ^ {u _ {1}} - \frac {\operatorname* {m a x} \left(\boldsymbol {f} _ {p _ {t}} ^ {b}\right)}{\left\| \frac {\partial \boldsymbol {f} _ {p _ {t}} ^ {b}}{\partial u} , \frac {\partial \boldsymbol {f} _ {p _ {t}} ^ {b}}{\partial v} \right\|} - \tau_ {\text {a d j u s t}} * \frac {\left\| \operatorname* {m a x} \left(\boldsymbol {f} _ {p _ {t}} ^ {b}\right) \right\|}{\left\| \frac {\partial \boldsymbol {f} _ {p _ {t}} ^ {b}}{\partial u} , \frac {\partial \boldsymbol {f} _ {p _ {t}} ^ {b}}{\partial v} \right\|}, +$$ + +$$ +w h e r e (u, v) = \operatorname {a r g m a x} _ {p _ {t}} f _ {p _ {t}} ^ {b} \tag {9} +$$ + +$$ +P ^ {v _ {1}} \longleftarrow P ^ {v _ {1}} - \frac {\max \left(\boldsymbol {f} _ {p l} ^ {b}\right)}{\| \frac {\partial \boldsymbol {f} _ {p l} ^ {b}}{\partial u} , \frac {\partial \boldsymbol {f} _ {p l} ^ {b}}{\partial v} \|} - \tau_ {a d j u s t} * \frac {\| \max \left(\boldsymbol {f} _ {p l} ^ {b}\right) \|}{\| \frac {\partial \boldsymbol {f} _ {p l} ^ {b}}{\partial u} , \frac {\partial \boldsymbol {f} _ {p l} ^ {b}}{\partial v} \|}, +$$ + +$$ +w h e r e (u, v) = \operatorname {a r g m a x} f _ {p _ {l}} ^ {b} \tag {10} +$$ + +$$ +P ^ {u _ {2}} \longleftarrow P ^ {u _ {2}} + \frac {\max \left(\boldsymbol {f} _ {p _ {b}} ^ {b}\right)}{\left\| \frac {\partial \boldsymbol {f} _ {p _ {b}} ^ {b}}{\partial u} , \frac {\partial \boldsymbol {f} _ {p _ {b}} ^ {b}}{\partial v} \right\|} + \tau_ {a d j u s t} * \frac {\left\| \max \left(\boldsymbol {f} _ {p _ {b}} ^ {b}\right) \right\|}{\left\| \frac {\partial \boldsymbol {f} _ {p _ {b}} ^ {b}}{\partial u} , \frac {\partial \boldsymbol {f} _ {p _ {b}} ^ {b}}{\partial v} \right\|}, +$$ + +$$ +w h e r e (u, v) = \operatorname {a r g m a x} f _ {p _ {b}} ^ {b} \tag {11} +$$ + +$$ +P ^ {v _ {2}} \longleftarrow P ^ {v _ {2}} + \frac {\operatorname* {m a x} \left(\boldsymbol {f} _ {p _ {r}} ^ {b}\right)}{\left\| \frac {\partial \boldsymbol {f} _ {p _ {r}} ^ {b}}{\partial u} , \frac {\partial \boldsymbol {f} _ {p _ {r}} ^ {b}}{\partial v} \right\|} + \tau_ {\text {a d j u s t}} * \frac {\left\| \operatorname* {m a x} \left(\boldsymbol {f} _ {p _ {r}} ^ {b}\right) \right\|}{\left\| \frac {\partial \boldsymbol {f} _ {p _ {r}} ^ {b}}{\partial u} , \frac {\partial \boldsymbol {f} _ {p _ {r}} ^ {b}}{\partial v} \right\|}, +$$ + +$$ +w h e r e (u, v) = \operatorname {a r g m a x} f _ {p _ {r}} ^ {b} \tag {12} +$$ + +Parameters for Proposal Updating. Each proposal undergoes at most 50 iterations of updates. For efficiency, we stop a proposal from being updated once it meets the following criteria. Specifically, the calculated maximum expansion for the proposal should be smaller than 0 (it means the border moves outside of the object boundary), and the maximum shrinkage should be smaller than a small margin, which we set to be 16 pixels. While it is acceptable for the proposal to be slightly larger than the tightest bounding box, it should not be smaller. Examples of boundary reasoning can be found in Figure 10. + +# A.4. Details for Object Mask and Confidence Score Calculation + +For a converged proposal $P$ , we can compute its object mask $M_{p}$ as the union of mask from center field and mask from boundary field: + +$$ +\boldsymbol {M} _ {p} ^ {\text {c e n t e r}} = \left\{ \begin{array}{l l} 1, & \text {i f} \| f _ {p} ^ {c} \| \geq 0. 5 \\ 0, & \text {o t h e r w i s e} \end{array} \right. 
\tag {13} +$$ + +$$ +\boldsymbol {M} _ {p} ^ {\text {b o u n d a r y}} = \left\{ \begin{array}{l l} 1, & \text {i f} \sigma \left(\boldsymbol {f} _ {p} ^ {b}\right) \geq 0. 5 \\ 0, & \text {o t h e r w i s e} \end{array} \right. \tag {14} +$$ + +$$ +\boldsymbol {M} _ {p} = \cup \left(\boldsymbol {M} _ {p} ^ {\text {c e n t e r}}, \boldsymbol {M} _ {p} ^ {\text {b o u n d a r y}}\right) \tag {15} +$$ + +To calculate the confidence score $conf_{p}$ for proposal $P$ , we consider its object existence score, center field, and boundary field. Specifically, we also consider mask area when calculating the confidence by comparing the object area in $P$ with other objects' areas within the same image. Suppose there are $K$ discovered objects within the image, the final score is calculated as: + +$$ +\operatorname {c o n f} _ {p} = f _ {p} ^ {e} * \max \left(\| \boldsymbol {f} _ {p} ^ {c} \|\right) * \max \left(\boldsymbol {f} _ {p} ^ {b}\right) * \left(\frac {\sum M _ {p}}{\max _ {k \in K} \sum M _ {k}}\right) ^ {0. 2 5} \tag {16} +$$ + +# A.5. Details for Pseudo Label Processing + +Given a set of discovered objects from scene images, we perform selection and assign each of them a weight to use them as pseudo labels for training the detector. Following the definition in the Section A.4, an object proposal $P$ will be selected if it satisfies three conditions below: + +$$ +\boldsymbol {f} _ {p} ^ {e} \geq \tau_ {\text {c o n f}} ^ {e}; \max \left(\left\| \boldsymbol {f} _ {p} ^ {c} \right\|\right) \geq \tau_ {\text {c o n f}} ^ {c}; \max \left(\boldsymbol {f} _ {p} ^ {b}\right) \geq \tau_ {\text {c o n f}} ^ {b} \tag {17} +$$ + +The three thresholds correspond to object existence score $(\tau_{conf}^{e})$ , maximum norm in center field $(\tau_{conf}^{c})$ and maximum value in boundary distance field $(\tau_{conf}^{b})$ . In our paper, we set: + +$$ +\tau_ {c o n f} ^ {e} = 0. 5; \quad \tau_ {c o n f} ^ {c} = 0. 8; \quad \tau_ {c o n f} ^ {b} = 0. 7 5 \tag {18} +$$ + +For each selected proposal, its weight for the detector training is determined by its relative area in the scene image: + +$$ +\left(\frac {\sum M _ {p}}{\max _ {k \in K} \sum M _ {k}}\right) ^ {0. 2 5}. +$$ + +# A.6. Details for Detector Training + +The architecture for the Class Agnostic Detector is Cascade Mask RCNN. All experiments are performed with the Detector2 (Wu et al., 2019) platform. Detectors are optimized for 25K iterations using SGD optimizer with a learning rate of 0.005 and a batch size of 16. We use a weight decay of 0.00005 and 0.9 momentum. Following CutLER (Wang et al., 2023a), we also use copy-paste augmentation with a uniformly sampled downsample ratio between 0.3 and 1.0. + +# A.7. Details for Datasets + +COCO (Lin et al., 2014): The MS COCO (Microsoft Common Objects in Context) dataset is a large-scale object detection and segmentation dataset. The COCO in the paper refers to the 2017 version that contains 118K training images and 5K validation images. + +COCO 20K (Lin et al., 2014): COCO 20K is a subset of the COCO trainval2014 with 19817 images. Since it contains images from both training and validation set from the 2014 + +version of COCO, this dataset is generally used to evaluate unsupervised approaches. + +LVIS (Gupta et al., 2019): LVIS (Large Vocabulary Instance Segmentation) is a dataset for long tail instance segmentation. It contains 164,000 images with more than 1,200 categories and more than 2 million high-quality instance-level segmentation masks. 
+ +KITTI (Geiger et al., 2012): KITTI (Karlsruhe Institute of Technology and Toyota Technological Institute) is one of the most popular datasets for use in mobile robotics and autonomous driving. Our method is evaluated with 7521 images from its trainval split. + +PASCAL VOC (Everingham et al., 2010): The PASCAL Visual Object Classes (VOC) 2012 dataset is a widely used benchmark for object detection, containing 1464 training images and 1449 validation images. + +Object365 V2 (Shao et al., 2019): Objects365 is a large-scale object detection dataset. It has 365 object categories and over 600K training images. We evaluate our method in terms of object detection on its validation split with 80K images. + +OpenImages V6 (Kuznetsova et al., 2020): OpenImages V6 is a large-scale dataset, consists of 9 million training images, 41,620 validation samples, and 125,456 test samples. We evaluate our method in terms of object detection on its validation split. + +# A.8. More Results on COCO* Validation Set + +We present a detailed evaluation on COCO* validation dataset based on object count in Table 5. We can see that, when the number of objects in each image is rather small (e.g., [0 - 4]), the results of top-performing baselines Vote-Cut/CuVLER are comparable to our method, all yielding high scores. However, as the number of objects per image increases (e.g., $\geq 5$ objects), our unMORE $_{disc}$ / unMORE consistently outperforms all baselines by growing margins, demonstrating the superiority of our method in dealing with challenging crowded images. + +Notably, UnSAM achieves high $\mathrm{AR}^{box} / \mathrm{AR}^{mask}$ scores (used in the original UnSAM paper to measure the average recall rate without limiting the number of predictions), but its $\mathrm{AR}_{100}^{box} / \mathrm{AR}_{100}^{mask}$ scores (only considers the top 100 predictions per image and commonly adopted for object segmentation) are clearly lower. This is because UnSAM focuses on excessively partitioning images by clustering granular segments, which sacrifices the accuracy of object discovery, but tends to oversegment objects. This is also qualitatively validated in the Figure 7 and Figure 8 in the main paper. + +Table 5: Quantitative results on COCO* validation set based on object count. "# of GT obj." refers to the average number of ground truth objects per image, while "# of pred obj." refers to that of predicted objects. + +
# of GT obj.Direct Object DiscoveryTraining Detectors
APboxAPboxAPboxARboxARboxAPmaskAPmaskAPmaskARmaskARmask# pred objAPboxAPboxAPboxARboxARboxAPboxAPboxAPmaskAPmaskARmaskpred obj
[0,4]MaskCut (K=3)25.112.313.328.528.522.58.910.624.224.21.8UnSAM15.510.510.266.473.315.910.410.160.165.5244.1
MaskCut (K=10)24.511.712.929.329.321.98.810.324.824.81.9CutLER55.235.434.461.261.251.229.028.652.452.4100.0
VoteCut38.921.122.039.139.137.017.419.034.334.38.5CutVLER56.936.135.160.560.553.630.129.952.652.699.9
unMORE_box (Ours)42.121.223.238.738.742.122.022.837.837.85.9unMORE (Ours)55.333.733.159.659.629.529.552.852.8100.0
[5,9]MaskCut (K=3)10.74.85.310.410.49.43.44.39.09.01.9UnSAM13.38.68.749.962.013.58.58.546.256.3317.9
MaskCut (K=10)11.45.05.511.411.49.63.64.49.99.92.1CutLER37.721.421.749.349.333.316.717.542.542.5100.0
VoteCut17.27.68.617.517.515.46.77.415.015.09.0CutVLER39.021.121.948.448.434.116.617.841.341.399.6
unMORE_box (Ours)25.212.713.725.825.824.012.312.824.224.27.9unMORE (Ours)40.821.822.849.949.937.018.619.644.444.4100.0
[10,14]MaskCut (K=3)5.12.32.74.94.94.41.61.84.34.31.9UnSAM11.26.76.938.552.711.36.76.836.648.6378.1
MaskCut (K=10)5.42.42.75.55.54.71.41.94.84.82.3CutLER26.313.214.340.340.322.810.211.534.934.9100.0
VoteCut8.83.03.99.59.57.23.63.38.18.19.2CutVLER28.513.715.239.739.724.911.112.434.034.099.7
unMORE_box (Ours)18.08.29.419.319.316.77.78.618.518.59.4unMORE (Ours)33.416.717.943.243.230.514.315.738.638.6100.0
[15,+])MaskCut (K=3)1.80.50.71.91.91.60.40.61.61.62.0UnSAM8.95.25.524.840.98.64.95.224.238.2475.7
MaskCut (K=10)1.70.50.82.12.11.50.40.71.91.92.3CutLER19.39.010.029.029.015.56.77.625.125.1100.0
VoteCut4.21.41.84.64.63.21.21.44.04.09.3CuVLER21.49.710.928.428.417.07.18.324.424.499.6
unMORE_box (Ours)13.66.57.114.014.012.35.66.313.413.412.2unMORE (Ours)29.014.215.333.433.424.811.012.530.030.0100.0
+ +# A.9. Details for CAD Training Settings + +In Sec 4.1 Group 3, since four methods train CAD with different settings, we adopt a diverse range of training settings, which are detailed as follows. The best setting for each method is marked with **bold**. Full results for all settings on COCO* validation set are in Table 6. + +1) For UnSAM, it has two detectors trained under two settings below. Both models are from the original paper and are included for reference. +- Setting #1: It trains a detector on pseudo objects discovered by MaskCut on ImageNet train set, and then the detector is used to infer scene images jointly with MaskCut. +- Setting #2: The detector trained in its Setting #1 is used to infer pseudo objects on SA-1B train set. Another Mask2Former is trained on these pseudo labels for inference on scene images. +2) For CutLER, it has three detectors trained under three settings below. The Settings #1/#2 are fairly comparable with our Settings #1/#2, whereas its Setting #3 is from the original paper. +- Setting #1: It is trained on pseudo objects discovered by its own MaskCut on COCO train set. +- Setting #2: It is trained on two groups of pseudo labels: one group from its discovered objects on COCO train set, another from object pseudo labels generated by MaskCut on ImageNet train set. +- Setting #3: It is trained on object pseudo labels generated by MaskCut on ImageNet train set. +3) For CuVLER, it has four detectors trained under four settings below. The Settings #1/#2 are fairly comparable with our Settings #1/#2, whereas its Settings #3/#4 are from the original paper. +- Setting #1: It is trained only on pseudo objects discovered by its own VoteCut on COCO train set. + +- Setting #2: It is trained on two groups of pseudo labels: one group from its discovered objects on COCO train set, another from object pseudo labels generated by VoteCut on ImageNet train set. +- Setting #3: It is trained only on object pseudo labels generated by VoteCut on ImageNet train set. +- Setting #4: It first uses the detector of Setting #3 to infer object pseudo labels on COCO train set, and then trains a new detector on these pseudo labels. + +4) For our method, named unMORE, we train two separate detectors under two settings: + +- Setting #1: It is trained only on pseudo objects discovered by our method on COCO train set. +- Setting #2: It is trained on two groups of pseudo labels: one group from our discovered objects on COCO train set, another from object pseudo labels generated by VoteCut on ImageNet train set. + +# A.10. Results on the Original COCO Validation Set + +This section presents the experiment results evaluated on original COCO validation set. Table 7 shows the quantitative results on COCO validation set. Table 8 shows quantitative results of detectors with different settings on the original COCO validation set. + +# A.11. Details for Ablation Settings + +As mentioned in Sec 5, We explore various combinations of these representations to train objectness network, which then discovers objects as pseudo labels for the final detector. Details of ablation settings are as follows: + +1) Only using a binary mask as the object-centric representation: In the task of object segmentation, a binary mask is probably the most commonly-used object representation. In particular, we remove all of our three object-centric representations, but just train the same objectness network to predict a binary mask. 
Then, when discovering multi- + +Table 6: Quantitative results of detectors with different settings on COCO* validation set. + +
| Method | Training Setting | $AP_{50}^{box}$ | $AP_{75}^{box}$ | $AP^{box}$ | $AR_{100}^{box}$ | $AP_{50}^{mask}$ | $AP_{75}^{mask}$ | $AP^{mask}$ | $AR_{100}^{mask}$ |
|---|---|---|---|---|---|---|---|---|---|
| UnSAM | Setting #1 | 3.5 | 2.1 | 2.3 | 30.5 | 3.2 | 2.0 | 2.1 | 27.2 |
| UnSAM | Setting #2 | 10.2 | 6.3 | 6.4 | 36.1 | 10.2 | 6.2 | 6.3 | 34.1 |
| CutLER | Setting #1 | 21.2 | 10.8 | 11.6 | 33.4 | 18.2 | 8.1 | 9.1 | 27.7 |
| CutLER | Setting #2 | 23.6 | 11.8 | 12.6 | 33.7 | 19.8 | 8.3 | 9.5 | 28.4 |
| CutLER | Setting #3 | 26.0 | 14.2 | 14.7 | 37.9 | 22.7 | 11.2 | 11.8 | 32.7 |
| CuVLER | Setting #1 | 26.1 | 13.2 | 14.1 | 36.0 | 22.6 | 10.3 | 11.3 | 30.6 |
| CuVLER | Setting #2 | 27.0 | 13.0 | 14.2 | 35.0 | 23.2 | 10.1 | 11.4 | 29.8 |
| CuVLER | Setting #3 | 27.2 | 14.0 | 14.9 | 37.2 | 23.2 | 10.7 | 11.8 | 30.2 |
| CuVLER | Setting #4 | 28.0 | 14.8 | 15.5 | 37.8 | 24.4 | 11.7 | 12.6 | 32.1 |
| unMORE (Ours) | Setting #1 | 31.2 | 15.6 | 16.8 | 40.0 | 28.8 | 12.7 | 14.9 | 36.1 |
| unMORE (Ours) | Setting #2 | 32.6 | 17.2 | 18.0 | 40.9 | 29.6 | 14.4 | 15.5 | 36.5 |
+ +Table 7: Quantitative results on the original COCO validation dataset. + +
Trainable Module\( \mathsf{AP}_{50}^{box} \)\( \mathsf{AP}_{75}^{box} \)\( \mathsf{AP}_{\text{box}}^{box} \)\( \mathsf{AR}_{100}^{box} \)\( \mathsf{AR}_{\text{box}}^{box} \)\( \mathsf{AP}_{50}^{\text{mask}} \)\( \mathsf{AP}_{75}^{\text{mask}} \)\( \mathsf{AP}_{\text{mask}}^{box} \)\( \mathsf{AR}_{100}^{\text{mask}} \)\( \mathsf{AR}_{\text{mask}}^{\text{mask}} \)avg. # obj.
Direct Object Discoveryw/o Learnable ModulesFreeMask-4.10.71.44.34.33.50.41.13.43.43.7
MaskCut (K=3)-6.42.53.17.77.75.41.82.36.56.51.8
MaskCut (K=10)-6.02.73.18.28.25.51.72.26.96.92.1
VoteCut-11.05.05.612.412.49.44.04.610.510.58.9
w/ Learnable ModulesDINOSAURRecon. SlotAtt2.10.20.65.55.50.80.10.22.52.57.0
FOUNDSeg. Head4.72.12.34.54.53.71.51.83.73.71.0
\( \mathsf{unMORE}_{disc} \) (Ours)Obj. Net15.76.97.916.516.514.76.97.515.915.98.2
Training Detectors-UnSAMDetector x 45.93.23.430.042.45.93.13.327.437.9332.2
CutLERDetector x 322.911.712.431.831.818.77.38.823.923.9100.0
CuVLERDetector x 223.412.112.832.232.220.49.610.426.826.899.7
unMORE (Ours)Obj. Network + Detector x 125.412.713.635.235.222.910.711.730.330.3100.0
+ +objects on scene images, we manually set a suitable step size to extensively search object candidates by querying the pretrained network. + +2) Only using a binary mask and an object existence score: This is to evaluate whether the object existence score can be useful for better object segmentation. In the absence of object boundary field, the binary mask representation can update bounding boxes. +3) Only using a binary mask and an object center field: This is to evaluate whether the object center field can be useful for better object segmentation. In the absence of object boundary field, the binary mask representation can update bounding boxes. +4) Using a binary mask, an object existence score and center field: This is to evaluate whether both object existence score and center field can be useful for better object segmentation. In the absence of object boundary field, the binary mask representation can update bounding boxes. +5) Only using an object boundary field: This is to verify the importance of object boundary field. +6) Only using an object boundary field and existence score: This is to evaluate whether adding the existence score can help object segmentation on top of the object boundary field. +7) Only using an object boundary field and center field: This is to evaluate whether adding the center field can help object segmentation on top of the object boundary field. + +8) Our full three-level object-centric representations: This is our full framework for reference. + +# A.12. More Ablations + +Selection of Fixed Step Size for Binary Baseline. Since the information provided by binary mask representation is very limited, the final discovered objects can be very sensitive to the step size. In order to choose a good step size in favor of the binary mask baseline, we randomly select 100 images from COCO* validation set and evaluate the results for a step size of 5, 15, 20, 30. According to the results shown in Table 9, we select 20 as the fixed step size. + +Ablation on Parameters for Pseudo Label Processing. We perform ablation studies on the parameters used in A.5. Specifically, we choose a wide range, i.e., $(0\sim 0.95)$ for score thresholds of object existence $\tau_{conf}^{e}$ , object center $\tau_{conf}^{c}$ and object boundary $\tau_{conf}^{b}$ on 7 datasets. As shown in Tables 10&11, more tolerant thresholds lead to higher AR scores because more objects can be discovered, but a decrease in AP because of low-quality detections. On the other hand, if thresholds are too strict, both AR and AP scores drop because only a limited number of objects are discovered. Nevertheless, our method is not particularly sensitive to the selection of thresholds as it demonstrates good performance across different thresholds. + +Ablation on Random Cropping Augmentation for the Objectness Network. During training our objectness network on ImageNet, we originally apply random cropping + +Table 8: Quantitative results of detectors with different settings on the original COCO validation set. + +
| Method | Training Setting | $AP^{box}_{50}$ | $AP^{box}_{75}$ | $AP^{box}$ | $AR^{box}_{100}$ | $AP^{mask}_{50}$ | $AP^{mask}_{75}$ | $AP^{mask}$ | $AR^{mask}_{100}$ |
|---|---|---|---|---|---|---|---|---|---|
| UnSAM | Setting #1 | 2.1 | 1.1 | 1.2 | 27.0 | 1.8 | 0.9 | 1.0 | 23.5 |
| UnSAM | Setting #2 | 5.9 | 3.2 | 3.4 | 30.0 | 5.9 | 3.1 | 3.3 | 27.4 |
| CutLER | Setting #1 | 19.3 | 9.9 | 10.6 | 29.4 | 16.3 | 7.3 | 8.2 | 23.2 |
| CutLER | Setting #2 | 20.8 | 10.4 | 11.1 | 29.7 | 17.2 | 7.0 | 8.1 | 23.3 |
| CutLER | Setting #3 | 21.9 | 11.8 | 12.3 | 32.7 | 18.9 | 9.2 | 9.7 | 27.0 |
| CuVLER | Setting #1 | 22.9 | 11.7 | 12.4 | 31.8 | 18.7 | 7.3 | 8.8 | 23.9 |
| CuVLER | Setting #2 | 23.2 | 11.3 | 12.3 | 31.2 | 19.7 | 8.5 | 9.5 | 24.9 |
| CuVLER | Setting #3 | 22.9 | 11.8 | 12.6 | 32.9 | 19.3 | 8.9 | 9.8 | 25.1 |
| CuVLER | Setting #4 | 23.4 | 12.1 | 12.8 | 32.2 | 20.4 | 9.6 | 10.4 | 26.8 |
| unMORE (Ours) | Setting #1 | 24.1 | 11.2 | 12.5 | 34.2 | 22.2 | 9.9 | 11.1 | 29.9 |
| unMORE (Ours) | Setting #2 | 25.4 | 12.7 | 13.6 | 35.2 | 22.9 | 10.7 | 11.7 | 30.3 |
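The binary-mask baseline discussed in the ablation settings has no center or boundary field to indicate how far a proposal should move, so its box update falls back to a fixed pixel step (the step size is ablated in Table 9 below). The sketch below illustrates one such greedy search; `objectness_score` is a hypothetical stand-in for querying the pretrained network, and the default step of 20 simply mirrors the value selected in the ablation.

```python
import numpy as np

def refine_box_fixed_step(image, box, objectness_score, step=20, max_iters=50):
    """Greedy box refinement for a mask-only baseline: each edge is moved in or out
    by a fixed pixel step, and a move is kept only if the objectness score improves.
    `objectness_score(image, box) -> float` is a placeholder for the pretrained network;
    `image` is an HxWxC array and `box` is [x0, y0, x1, y1]."""
    h, w = image.shape[:2]
    best = list(box)
    best_score = objectness_score(image, best)
    for _ in range(max_iters):
        improved = False
        for edge, delta in [(i, d) for i in range(4) for d in (-step, step)]:
            cand = list(best)
            cand[edge] += delta
            cand = [max(0, cand[0]), max(0, cand[1]), min(w, cand[2]), min(h, cand[3])]
            if cand[2] - cand[0] < step or cand[3] - cand[1] < step:
                continue                          # reject degenerate boxes
            score = objectness_score(image, cand)
            if score > best_score:
                best, best_score, improved = cand, score, True
        if not improved:                          # converged: no single-edge move helps
            break
    return best, best_score
```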
+ +Table 9: Results of different step sizes for binary baseline on COCO* validation set. + +
| Step size | $AP^{box}_{50}$ | $AP^{box}_{75}$ | $AP^{box}$ | $AR^{box}_{100}$ | $AP^{mask}_{50}$ | $AP^{mask}_{75}$ | $AP^{mask}$ | $AR^{mask}_{100}$ |
|---|---|---|---|---|---|---|---|---|
| 5 | 10.4 | 4.1 | 4.9 | 10.8 | 9.4 | 3.7 | 4.3 | 9.6 |
| 15 | 12.4 | 5.8 | 6.4 | 12.1 | 10.7 | 4.8 | 5.4 | 10.7 |
| 20 | 13.1 | 6.3 | 6.9 | 12.1 | 11.6 | 5.3 | 5.9 | 10.7 |
| 30 | 11.8 | 5.4 | 6.1 | 11.5 | 10.1 | 4.6 | 5.1 | 10.1 |
+ +augmentation. Here, we conduct an additional ablation study by omitting the random cropping operation during training the objectness network while keeping all other settings the same. Table 12 shows the quantitative results on the COCO* validation set. We can see that random cropping is indeed helpful for the objectness network to learn robust center and boundary fields. Primarily, this is because during the multi-object reasoning stage, many proposals just have partial or fragmented objects, but the random cropping augmentation inherently enables the objectness network to infer rather accurate center and boundary field for those partial objects, thus driving the proposals to be updated correctly. + +Ablation on Rough Masks for Training Objectness Network. We conducted the following ablation study on four types of pseudo-masks: + +- SelfMask (Shin et al., 2022b): For each image, we employ the strong unsupervised saliency detection model SelfMask to predict a salient region as the pseudo label. +- MaskCut: For each image, we use the first object discovered by MaskCut as the pseudo label. +VoteCut: It's used in our main paper. +- VoteCut+SAM: For each image, a rough mask is generated by VoteCut, and its bounding box is used as a prompt for SAM to predict the final pseudo mask. While this yields the best pseudo labels, SAM is a fully supervised model, so this ablation is for reference only. +As shown in Table 13, our method is amenable to all types of rough masks, though their quality affects unMORE\_disc performance. While SAM scores highest, its improvement over VoteCut is not substantial, as it still relies on bounding box prompts from VoteCut. Importantly, our method does not depend on specific pretrained features, enabling the use of enhanced pretrained models in the future. + +# A.13. Time Consumption and Throughput + +Time consumption is summarized in Table 14. unMORE $_{disc}$ takes 10 hours to train the objectness network and is slower for Direct Object Discovery. However, our subsequent detector unMORE requires only 30 hours to train, benefiting from the high-quality pseudo labels from unMORE $_{disc}$ , while baseline detectors take over 60 hours. Ultimately, the inference speed of our unMORE matches that of CutLER and CuVLER. + +Regarding the throughput, for each image on average, the number of initial proposals is 1122.7, whereas the number of predicted objects from unMORE\_disc is 8.9. Most initial proposals have low existence scores and are discarded at the first iteration. The Non-Maximum Suppression (NMS) will also remove redundant proposals. + +# A.14. Failure Cases + +We present failure cases in Figure 11 and discuss limitations as follows. + +1. Direct Object Discovery of unMORE $_{disc}$ takes time. It could be possible to leverage reinforcement learning techniques to learn an efficient policy net to discover objects. +2. Our method struggles to separate overlapping objects with similar textures, as shown in the attached Figure 11. Additional language priors may help alleviate this issue. + +# A.15. More Visualizations + +Table 10: Ablation results for thresholds of object existence $\tau_{conf}^{e}$ , object center $\tau_{conf}^{c}$ and object boundary $\tau_{conf}^{b}$ on COCO* validation set. + +
| $\tau^{e}_{conf}$ | $\tau^{c}_{conf}$ | $\tau^{b}_{conf}$ | $AP^{box}_{50}$ | $AP^{box}_{75}$ | $AP^{box}$ | $AR^{box}_{100}$ | $AP^{mask}_{50}$ | $AP^{mask}_{75}$ | $AP^{mask}$ |
|---|---|---|---|---|---|---|---|---|---|
| 0.0 | 0.8 | 0.75 | 31.2 | 16.7 | 17.4 | 41.0 | 28.7 | 14.6 | 15.3 |
| 0.25 | 0.8 | 0.75 | 31.5 | 16.7 | 17.5 | 40.8 | 28.6 | 14.3 | 15.2 |
| 0.5 | 0.8 | 0.75 | 32.6 | 17.2 | 18.0 | 40.9 | 29.6 | 14.4 | 15.5 |
| 0.75 | 0.8 | 0.75 | 30.8 | 16.2 | 16.9 | 38.9 | 27.7 | 13.3 | 14.3 |
| 0.95 | 0.8 | 0.75 | 28.1 | 13.4 | 14.7 | 34.4 | 24.3 | 10.7 | 12.1 |
| 0.5 | 0.0 | 0.75 | 32.5 | 16.4 | 17.5 | 40.0 | 29.2 | 13.6 | 14.9 |
| 0.5 | 0.25 | 0.75 | 31.8 | 16.4 | 17.3 | 39.9 | 28.5 | 13.5 | 14.7 |
| 0.5 | 0.5 | 0.75 | 31.0 | 16.2 | 17.0 | 40.2 | 27.7 | 13.3 | 14.4 |
| 0.5 | 0.8 | 0.75 | 32.6 | 17.2 | 18.0 | 40.9 | 29.6 | 14.4 | 15.5 |
| 0.5 | 0.95 | 0.75 | 29.8 | 15.8 | 16.5 | 38.1 | 26.8 | 13.2 | 14.1 |
| 0.5 | 0.8 | 0.0 | 31.8 | 16.0 | 17.0 | 38.7 | 28.4 | 13.2 | 14.5 |
| 0.5 | 0.8 | 0.25 | 31.2 | 16.1 | 17.0 | 38.9 | 27.8 | 13.2 | 14.3 |
| 0.5 | 0.8 | 0.5 | 31.7 | 16.9 | 17.5 | 40.6 | 28.4 | 13.7 | 14.7 |
| 0.5 | 0.8 | 0.75 | 32.6 | 17.2 | 18.0 | 40.9 | 29.6 | 14.4 | 15.5 |
| 0.5 | 0.8 | 0.95 | 31.6 | 17.5 | 17.9 | 39.8 | 28.0 | 13.3 | 14.5 |
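A minimal sketch of how the three thresholds ablated above could be applied when post-processing discovered objects is given below. The dictionary keys and the way a per-object confidence is summarized are illustrative simplifications rather than the released pipeline; only the default thresholds (0.5 / 0.8 / 0.75) follow the tables.

```python
def filter_pseudo_labels(objects, tau_e=0.5, tau_c=0.8, tau_b=0.75):
    """Keep a discovered object only if its existence, center-field and boundary-field
    confidences all clear their thresholds. Each object is a dict with keys
    'existence', 'center_conf' and 'boundary_conf' (simplified stand-ins for statistics
    derived from the objectness network outputs)."""
    kept = []
    for obj in objects:
        if (obj["existence"] >= tau_e
                and obj["center_conf"] >= tau_c
                and obj["boundary_conf"] >= tau_b):
            kept.append(obj)
    return kept

# Looser thresholds keep more (noisier) pseudo labels, raising AR but lowering AP.
candidates = [
    {"existence": 0.9, "center_conf": 0.85, "boundary_conf": 0.80},
    {"existence": 0.4, "center_conf": 0.90, "boundary_conf": 0.90},
]
print(len(filter_pseudo_labels(candidates)))  # -> 1
```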
+ +Table 11: Ablation results for thresholds of object existence $\tau_{conf}^{e}$ , object center $\tau_{conf}^{c}$ and object boundary $\tau_{conf}^{b}$ on COCO20K, LVIS, KITTI, VOC, Object365 and OpenImages. + +
COCOCOCO20KLVISKITTIVOCObject365OpenImages
τeconfτc confτbconfAPbox50ARbox100APmask50ARmask100APbox50ARbox100APmask50ARmask100APbox50ARbox100APmask50ARmask100APbox50ARbox100APbox50ARbox100APbox50ARbox100APbox50ARbox100
0.00.80.7523.835.121.930.824.335.222.631.110.224.99.022.625.332.538.546.923.636.318.329.5
0.250.80.7524.134.822.030.324.635.022.630.610.224.48.721.925.034.039.146.623.836.018.729.4
0.50.80.7525.435.222.930.325.935.423.630.510.424.18.921.426.734.840.447.424.735.919.029.5
0.750.80.7524.533.721.928.825.134.122.729.29.922.58.320.025.533.640.446.723.836.018.729.4
0.950.80.7523.230.219.925.023.830.520.625.38.718.86.916.321.629.639.443.721.630.018.826.5
0.50.00.7525.734.522.829.826.234.823.430.110.423.38.520.928.735.541.347.024.535.119.729.0
0.50.250.7525.034.422.229.525.634.823.029.810.123.28.320.627.733.641.046.823.835.119.329.0
0.50.50.7524.534.721.829.925.134.822.530.19.823.68.021.124.132.740.346.723.335.319.929.7
0.50.80.7525.435.222.930.325.935.423.630.510.424.18.921.426.734.840.447.424.7.35.919.029.5
0.50.950.7523.732.921.128.324.333.221.828.59.621.68.219.325.733.338.645.622.533.218.328.4
0.50.80.024.733.421.928.725.333.622.629.010.122.38.219.827.433.440.045.923.633.819.328.3
0.50.80.2524.633.621.828.925.334.022.529.39.822.48.019.826.733.540.746.123.234.119.728.6
0.50.80.525.335.222.430.025.935.323.130.410.023.68.420.925.434.341.347.823.735.819.929.9
0.50.80.7525.435.222.930.325.935.423.630.510.424.18.921.426.734.840.447.424.7 .35.919.029.5
0.50.80.9520.432.219.728.624.434.422.729.810.523.39.021.029.735.137.646.423.834.817.829.2
+ +Table 12: Ablation results on COCO* validation set for random cropping augmentation of the objectness network. + +
| Configuration | $AP^{box}_{50}$ | $AP^{box}_{75}$ | $AP^{box}$ | $AR^{box}_{100}$ | $AP^{mask}_{50}$ | $AP^{mask}_{75}$ | $AP^{mask}$ | $AR^{mask}_{100}$ |
|---|---|---|---|---|---|---|---|---|
| unMORE$_{disc}$ (with random cropping) | 19.1 | 9.0 | 10.1 | 19.6 | 17.8 | 8.7 | 9.5 | 18.9 |
| unMORE$_{disc}$ (w/o random cropping) | 15.7 | 7.5 | 8.2 | 18.1 | 15.6 | 6.6 | 7.9 | 17.4 |
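The ablation above suggests that random cropping exposes the objectness network to partial, fragmented objects during training. The snippet below sketches the kind of paired image/mask cropping this refers to using torchvision; the crop scale range is an illustrative choice rather than the paper's exact setting.

```python
import torchvision.transforms as T
import torchvision.transforms.functional as TF

def random_crop_pair(image, mask, out_size=(224, 224), scale=(0.3, 1.0)):
    """Apply the same random resized crop to an image and its rough object mask,
    so the network regularly sees truncated / partial objects during training.
    PIL images or CxHxW tensors are both accepted."""
    i, j, h, w = T.RandomResizedCrop.get_params(image, scale=scale, ratio=(3 / 4, 4 / 3))
    image_crop = TF.resized_crop(image, i, j, h, w, out_size, T.InterpolationMode.BILINEAR)
    mask_crop = TF.resized_crop(mask, i, j, h, w, out_size, T.InterpolationMode.NEAREST)
    return image_crop, mask_crop
```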
+ +Table 13: Ablation study for rough masks. + +
| Model | Rough Masks | SSL Features / Supervision | $AP^{box}_{50}$ | $AP^{box}_{75}$ | $AP^{box}$ | $AR^{box}_{100}$ | $AP^{mask}_{50}$ | $AP^{mask}_{75}$ | $AP^{mask}$ | $AR^{mask}_{100}$ |
|---|---|---|---|---|---|---|---|---|---|---|
| unMORE$_{disc}$ | SelfMask | DINO_b16, MoCov2, SwAV | 13.2 | 6.1 | 4.8 | 16.4 | 12.0 | 5.0 | 5.6 | 15.3 |
| unMORE$_{disc}$ | MaskCut | DINO_b8 | 16.3 | 7.3 | 6.4 | 17.7 | 14.3 | 5.7 | 6.1 | 18.7 |
| unMORE$_{disc}$ | VoteCut | DINO_b8, DINO_s8, DINO_b16, DINO_s16, DINOv2_s14, DINOv2_b14 | 19.1 | 9.0 | 10.1 | 19.6 | 17.8 | 8.7 | 9.5 | 18.9 |
| unMORE$_{disc}$ | VoteCut + SAM | supervised on SA-1B dataset | 21.9 | 9.1 | 10.7 | 19.7 | 18.4 | 9.2 | 9.9 | 19.1 |
+ +Table 14: Training and inference time of different methods. For a fair comparison, all methods are evaluated on the same hardware configurations. + +
| Group | Method | Training Time (hours in total) | Inference Efficiency (seconds per image) |
|---|---|---|---|
| Direct Object Discovery | MaskCut (N=3) | - | 11.3 |
| Direct Object Discovery | MaskCut (N=10) | - | 33.7 |
| Direct Object Discovery | VoteCut | - | 5.1 |
| Direct Object Discovery | unMORE$_{disc}$ | 10.1 | 45.3 |
| Training Detectors | UnSAM | 90.0 | 3.0 |
| Training Detectors | CutLER | 75.0 | 0.1 |
| Training Detectors | CuVLER | 60.0 | 0.1 |
| Training Detectors | unMORE | 30.0 | 0.1 |
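The throughput discussion in A.13 relies on pruning the roughly 1.1k initial proposals per image down to a handful of objects. A hedged sketch of that pruning, existence-score filtering followed by standard non-maximum suppression, is shown below; the two threshold values are illustrative.

```python
import torch
from torchvision.ops import nms

def prune_proposals(boxes, existence_scores, score_thresh=0.5, iou_thresh=0.5):
    """Discard proposals with low predicted object-existence scores, then remove
    near-duplicates with non-maximum suppression. `boxes` is an (N, 4) tensor in
    (x0, y0, x1, y1) format and `existence_scores` an (N,) tensor in [0, 1]."""
    keep_score = existence_scores >= score_thresh
    boxes, scores = boxes[keep_score], existence_scores[keep_score]
    keep_nms = nms(boxes, scores, iou_thresh)
    return boxes[keep_nms], scores[keep_nms]

# Toy example: two overlapping confident boxes collapse to one; the weak box is dropped.
b = torch.tensor([[0., 0., 100., 100.], [5., 5., 105., 105.], [200., 200., 220., 220.]])
s = torch.tensor([0.95, 0.90, 0.10])
print(prune_proposals(b, s)[0].shape[0])  # -> 1
```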
+ +![](images/5264db0c35b1c7e34f5f63abef41176b39bac19f5c12d068e7f12156bd4d55c6.jpg) +Multi-Object Reasoning Overview + +![](images/18927095e069307b8921d86b8fd8bdf6be7dee83fecd03cbf082b1bd9db36b8f.jpg) + +![](images/7eba49995c261cd31cbc83c9ea85f95612ab190f8d25a69ab39dbba1070660ab.jpg) +Figure 10: Multi-object reasoning with object center and boundary representations on a multi-object image. + +![](images/8629c5271cdbd3e80e602824b0aa325f7ed5547365e37780764aed2f589669c4.jpg) +Figure 11: Failure cases of unMOREdisc. + +# A.16. Details of COCO* Validation Set + +In COCO*, we exhaustively label objects in the COCO val2017 dataset, which comprises 5,000 images and originally contains 36,781 instances across 90 categories. We have added 197 new object categories and labeled previously unannotated objects within the original COCO categories. In total, COCO* includes 5,000 images, 287 categories, and 47,117 labeled objects. Details for the annotated categories are provided in Table 15. We use SAM (Kirillov et al., 2023) to expedite the labeling process. We label each object of interest with a tightest bounding box around it. This bounding box, along with the full image, is then fed into the SAM model to generate a dense binary mask. + +Table 15: Details of COCO* validation set. This table includes the unique class IDs, class names and the number of newly labeled objects that belong to each class. Specifically, the newly introduced classes are assigned with IDs from 100 to 297. Apart from the 197 new categories, we also label objects belonging to the original COCO classes (the id between 1-90) that are not labeled in COCO validation 2017. In summary, we have labeled 10,336 objects in addition to the original 36,781 objects on COCO validation 2017, resulting in 47,117 objects on 5,000 images. + +
idclass namecountidclass namecountidclass namecountidclass namecount
3car9128tissue184183cabbage24247corn9
11fire hydrant1129rice27184cucumber39248plum5
15bench6130painting445185calendar13249MP3 player6
17cat2131board40186pinapple19250garlic3
20sheep3132ballon49187key11251scallion2
33suitcase1133camera71188pumpkin6252noodle9
44bottle175134handler73189ball15253soup14
47cup44135soap19190calculator6254onion6
49knife5136brush37191flashlight8255sausage20
50spoon8137shower21192usb13256vegetable19
51bowl17138beetroot6193potato15257fishbowl4
53apple19139meat102194ipad5258wallet3
56broccoli1140bridge11195pad40259buoy15
57carrot11141grape55196banner174260roadblock56
59pizza4142cheese10197funnel3261chocolate12
61cake12143clothes102198blender30262shell7
62chair34144box186199name tag125263wool5
63couch2145curtain228200jar74264avocado1
67dining table2146beans15201flag156265charger9
70toilet10147dustbin131202peach4266card4
75remote1148broom6203radio5267coin4
76keyboard63149stand86204helmet466268wire9
77cell phone4150statue69205cart32269piano6
79oven11151fries16206toothpaste14270chinaware13
81sink35152plastic bag104207coconut6271balance2
82refrigerator1153blanket71208salmon21272pancake3
84book18154bathtub38209tongs1273pepper8
86vase16155stationary59210CD player34274eggplant2
101cabinet291156sauce47211heater18275napkin18
102carpet65157poster194212air conditioner12276table stand3
103lamp495158sail5213butterfly22277kiwifruit1
104basket87159rhino3214tent15278fig1
105pillow312160paper142215salad18279soother2
106mirror67161hook28216spagatti6280pomelo2
107pot227162hand dryer1217gravestone9281guita2
108hat179163tomato53218arcade game machine1282screen15
109scarf13164lemon18219chips12283cargo2
110flower253165snail1220fish16284map4
111applicance82166candle70221pig1285coffee machine1
112can71167teapot46222dish71286dishwasher1
113skate shoe189168moon4223CD30287soap stand1
114glove143169strawberry26224doll29288shelf12
115stove45170paperbag20225watermelon6289prize0
116watch38171lid30226cherry4290tower5
117ornament187172earphone32227cream12291picture13
118oar4173egg28228toy43292vent5
119speaker90174butter10229pomegranate1293baggage tag32
120printer22175tap220230rolling pin2294biscuit7
121monitor4176fan38231envelopop3295telescope1
122basin75177switch128241sticker51296pear5
123road sign555178telephone34242dough7297ferris wheel2
124towel213179socket114243pan12298lizard1
125ashtray7180bag86244peanut1
126plate190181quilt46245billboard154
127bread87182tank11246ladder6
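As described in A.16, each COCO* object is annotated with a tight bounding box that is then passed to SAM to produce a dense binary mask. The sketch below shows that box-prompted workflow with the official `segment_anything` package; the checkpoint path and model type are placeholders.

```python
import numpy as np
from segment_anything import SamPredictor, sam_model_registry

# Placeholder checkpoint path / model type; any official SAM checkpoint works the same way.
sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")
predictor = SamPredictor(sam)

def box_to_mask(image_rgb, box_xyxy):
    """Turn a tight annotator-provided bounding box into a dense binary mask via SAM."""
    predictor.set_image(image_rgb)                      # HxWx3 uint8 RGB array
    masks, scores, _ = predictor.predict(
        box=np.asarray(box_xyxy, dtype=np.float32),     # (x0, y0, x1, y1)
        multimask_output=False,                         # single best mask per box
    )
    return masks[0]                                     # HxW boolean mask
```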
+ +# A.17. More Results and Analysis of Object-centric Representations + +In this section, we provide more insights into the comparison between our proposed center-boundary representations and self-supervised features. In particular, we experiment with four pre-trained models from DINO and two pre-trained models from DINOv2, with different patch sizes and/or model parameter scales. + +Motivated by NCut (Shi & Malik, 2000) algorithm, given a set of image features, we construct a weighted graph. The weight on each edge is computed as the similarity between features, formulating an affinity matrix $W$ . Then, we solve an eigenvalue system $(D - W)x = \lambda Dx$ for a set of eigenvectors $x$ and eigenvalues $\lambda$ , where $D$ is the diagonal matrix. In Figures 12 & 13 & 14 & 15 & 16, we visualize the eigenvectors corresponding to the 2nd, 3rd, and 4th smallest eigenvalues. Specifically, we resize all eigenvectors to be the same size as the source image. + +In practice, methods like TokenCut (Wang et al., 2023b) and CuVLER (Arica et al., 2024) directly use the eigenvector corresponding to the 2nd smallest eigenvalue and perform clustering onto it. + +From Figures 12 & 13 & 14 & 15 & 16, we observe that segmenting objects via grouping pre-trained self-supervised features: 1) focuses on large objects that dominate the image, while ignoring objects with smaller sizes, 2) tends to capture semantic similarity / background-foreground contrast, instead of objectness. For example, in Figure 12, only the "bed" object with a large size can be discovered by clustering eigenvectors. In Figure 13, the two "keyboards", two "monitors", and two "speakers" are hard to be distinguished into separate clusters. Such behaviors are fundamentally due to the training of self-supervised features only involving image-level contrast, which can hardly lead to fine-grained object understanding. + +In contrast, as shown in the last row of Figures 12 & 13 & 14 & 15 & 16, our proposed center and boundary representation captures more fine-grained properties that directly reflect objectness, which naturally leads to better object discovery results. It should be noted that the merged center field and merged boundary distance field are derived by combining all proposals with their predicted center field and boundary distance field, instead of predicted in one pass. + +![](images/cdc0aae72d2e1685aa33d6063ff7524f564ab6bf7553e1e464255bfae40c93db.jpg) +Figure 12: Comparison between DINO/DINOv2 features with proposed boundary-center representations. The eigenvectors are reshaped to be the size of the image. The last row shows the illustrations for the proposed center and boundary distance representations (predicted). + +![](images/6adb90b433a118197324fbb948b5ab0d375cf4995deebc92af56d9c92078ade6.jpg) +Figure 13: Comparison between DINO/DINOv2 features with proposed boundary-center representations. The eigenvectors are reshaped to be the size of the image. The last row shows the illustrations for the proposed center and boundary distance representations (predicted). + +![](images/a411b0567955c9ea9210e9c5f1a8e38cb491d0358bc87a9887dfec11fb2a705d.jpg) +Figure 14: Comparison between DINO/DINOv2 features with proposed boundary-center representations. The eigenvectors are reshaped to be the size of the image. The last row shows the illustrations for the proposed center and boundary distance representations (predicted). 
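For reference, the eigenvector visualizations discussed in A.17 can be reproduced from any patch-feature map with a few lines of dense linear algebra. The sketch below assumes L2-normalized ViT patch features and clips negative affinities to zero, which is one common simplification rather than the exact TokenCut/CuVLER recipe.

```python
import numpy as np
from scipy.linalg import eigh

def ncut_eigenvectors(features, grid_hw, num_vectors=3):
    """Solve (D - W) x = lambda D x for patch features and return the eigenvectors of the
    2nd..(num_vectors+1)-th smallest eigenvalues, reshaped to the patch grid.
    `features` is (N, C) with N = grid_hw[0] * grid_hw[1]."""
    f = features / (np.linalg.norm(features, axis=1, keepdims=True) + 1e-8)
    w = np.clip(f @ f.T, 0.0, None)             # cosine affinity matrix W, negatives removed
    d = np.diag(w.sum(axis=1))                  # degree matrix D
    _, eigvecs = eigh(d - w, d)                 # generalized symmetric eigenproblem
    selected = eigvecs[:, 1:1 + num_vectors]    # skip the trivial constant eigenvector
    return selected.T.reshape(num_vectors, *grid_hw)

# Example with random features on a 14x14 patch grid (e.g., a 224px image with patch size 16).
vecs = ncut_eigenvectors(np.random.randn(14 * 14, 384), (14, 14))
print(vecs.shape)  # -> (3, 14, 14)
```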
+ +![](images/ede0af5d22f69dbb4c3003d886577d1eb74fc9184d00c433e63f0cc4d6bb819b.jpg) +Figure 15: Comparison between DINO/DINOv2 features with proposed boundary-center representations. The eigenvectors are reshaped to be the size of the image. The last row shows the illustrations for the proposed center and boundary distance representations (predicted). + +![](images/e13355063295be5e407a60c5bcd5248823eee05eaf2fc3f433351a9694376009.jpg) +Figure 16: Comparison between DINO/DINOv2 features with proposed boundary-center representations. The eigenvectors are reshaped to be the size of the image. The last row shows the illustrations for the proposed center and boundary distance representations (predicted). + +# A.18. Efficiency of Direct Object Discovery + +For our method of direct object discovery on the COCO* validation set as described in Group 2 of Sec 4.1, in implementation, the maximum number of iterations to optimize a proposal is set to be 50. Nevertheless, in practice, as shown in Figure 17 which illustrates the relationship between the average number of pixels to increase or decrease and the number of optimization steps, we observe that all proposals tend to converge after just 10 iterations. + +![](images/99f10db61c1fc9ddd146082e0907afce5ef510da7802f4994c054af92b77969e.jpg) +Figure 17: The relationship between the average number of pixels to increase/decrease and the number of optimization steps. \ No newline at end of file diff --git a/ICML/2025/unMORE_ Unsupervised Multi-Object Segmentation via Center-Boundary Reasoning/images.zip b/ICML/2025/unMORE_ Unsupervised Multi-Object Segmentation via Center-Boundary Reasoning/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..3f8991f2fd6c79513899c3ca7ea7f84881852381 --- /dev/null +++ b/ICML/2025/unMORE_ Unsupervised Multi-Object Segmentation via Center-Boundary Reasoning/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:cb3d70c7000518ba2ee385cb7b9cc5832d49a0e5942556f1214fa861a46327aa +size 2948347 diff --git a/ICML/2025/unMORE_ Unsupervised Multi-Object Segmentation via Center-Boundary Reasoning/layout.json b/ICML/2025/unMORE_ Unsupervised Multi-Object Segmentation via Center-Boundary Reasoning/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..4822c42750b0067f40ae26474eec7bd5c0b504c5 --- /dev/null +++ b/ICML/2025/unMORE_ Unsupervised Multi-Object Segmentation via Center-Boundary Reasoning/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:cb855af7e7eedbc9e620356012fa36c1088a182dec1ffff3815ba4f6d1e21ea0 +size 908756 diff --git a/ICML/2025/video-SALMONN-o1_ Reasoning-enhanced Audio-visual Large Language Model/e0f4ad28-3d98-4bf5-b28e-836f1aaf601b_content_list.json b/ICML/2025/video-SALMONN-o1_ Reasoning-enhanced Audio-visual Large Language Model/e0f4ad28-3d98-4bf5-b28e-836f1aaf601b_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..72f43f4bb5bf494eb8f7a6bdb6aa15efda8e6973 --- /dev/null +++ b/ICML/2025/video-SALMONN-o1_ Reasoning-enhanced Audio-visual Large Language Model/e0f4ad28-3d98-4bf5-b28e-836f1aaf601b_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f4023d28266f2871513da0e86cb5df04e4f11794392a189c87e81fbf6b8cad28 +size 128499 diff --git a/ICML/2025/video-SALMONN-o1_ Reasoning-enhanced Audio-visual Large Language Model/e0f4ad28-3d98-4bf5-b28e-836f1aaf601b_model.json b/ICML/2025/video-SALMONN-o1_ Reasoning-enhanced Audio-visual Large Language 
Model/e0f4ad28-3d98-4bf5-b28e-836f1aaf601b_model.json new file mode 100644 index 0000000000000000000000000000000000000000..0390ad798089d14a949cc6ffc97fad2cb4573121 --- /dev/null +++ b/ICML/2025/video-SALMONN-o1_ Reasoning-enhanced Audio-visual Large Language Model/e0f4ad28-3d98-4bf5-b28e-836f1aaf601b_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d342717aec4557a052aa99c3e4af04c7bbb0de0a1027b3b5ccae2b30a077e525 +size 166373 diff --git a/ICML/2025/video-SALMONN-o1_ Reasoning-enhanced Audio-visual Large Language Model/e0f4ad28-3d98-4bf5-b28e-836f1aaf601b_origin.pdf b/ICML/2025/video-SALMONN-o1_ Reasoning-enhanced Audio-visual Large Language Model/e0f4ad28-3d98-4bf5-b28e-836f1aaf601b_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..5419ba8ec27ba8fba8204c1ace6ce44db81ed2b7 --- /dev/null +++ b/ICML/2025/video-SALMONN-o1_ Reasoning-enhanced Audio-visual Large Language Model/e0f4ad28-3d98-4bf5-b28e-836f1aaf601b_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ba5c0531880438976a13afd3cbcb481aed07d813cf2b9fa22c05652d7977e198 +size 13707885 diff --git a/ICML/2025/video-SALMONN-o1_ Reasoning-enhanced Audio-visual Large Language Model/full.md b/ICML/2025/video-SALMONN-o1_ Reasoning-enhanced Audio-visual Large Language Model/full.md new file mode 100644 index 0000000000000000000000000000000000000000..d6c9c6542ae5a5cf4151892d41a6de0db4df7b95 --- /dev/null +++ b/ICML/2025/video-SALMONN-o1_ Reasoning-enhanced Audio-visual Large Language Model/full.md @@ -0,0 +1,618 @@ +# video-SALMONN-o1: Reasoning-enhanced Audio-visual Large Language Model + +Guangzhi Sun $^{123}$ Yudong Yang $^{1}$ Jimin Zhuang $^{1}$ Changli Tang $^{1}$ Yixuan Li $^{1}$ Wei Li $^{2}$ Zejun Ma $^{2}$ Chao Zhang $^{1}$ + +# Abstract + +While recent advancements in reasoning optimization have significantly enhanced the capabilities of large language models (LLMs), existing efforts to improve reasoning have been limited to solving mathematical problems and focusing on visual graphical inputs, neglecting broader applications in general video understanding. This paper proposes video-SALMONN-o1, the first open-source reasoning-enhanced audio-visual LLM designed for general video understanding tasks. To enhance its reasoning abilities, we develop a reasoning-intensive dataset featuring challenging audio-visual questions with step-by-step solutions. We also propose process direct preference optimization (pDPO), which leverages contrastive step selection to achieve efficient step-level reward modelling tailored for multimodal inputs. Additionally, we introduce RivaBench, the first reasoning-intensive video understanding benchmark, featuring over 4,000 high-quality, expert-curated question-answer pairs across scenarios such as standup comedy, academic presentations, and synthetic video detection. video-SALMONN-o1 achieves $3 - 8\%$ accuracy improvements over the LLaVA-OneVision baseline across different video reasoning benchmarks. Besides, pDPO achieves $6 - 8\%$ improvements compared to the supervised fine-tuning model on RivaBench. Enhanced reasoning enables video-SALMONN-o1 zero-shot synthetic video detection capabilities. $^{1}$ + +# 1. Introduction + +The recent advancements in optimizing the reasoning process have further boosted text-based large language mod + +$^{1}$ Tsinghua University $^{2}$ ByteDance $^{3}$ University of Cambridge. Correspondence to: Chao Zhang . 
+ +Proceedings of the $42^{nd}$ International Conference on Machine Learning, Vancouver, Canada. PMLR 267, 2025. Copyright 2025 by the author(s). + +$^{1}$ Code available: https://github.com/BriansIDP/video-SALMONN-ol. + +els (LLMs) (OpenAI, 2024; DeepSeek Team, 2024; Qwen Team, 2024b; Zhao et al., 2024; Yuan et al., 2024) performance in answering complex logical questions, such as math problems (Yang et al., 2024; Wang et al., 2024b; Sun et al., 2024d; Ying et al., 2024) and coding tasks (Zhang et al., 2024f). These methods usually first split the solution into multiple simpler steps to form a reasoning path ending with the final solution, as demonstrated in chain-of-thought (CoT) (Wei et al., 2022). Advanced training approaches have been developed such as the outcome reward model (ORM) (Cobbe et al., 2021; Yu et al., 2024a; Zhang et al., 2024b) that optimizes the entire reasoning path based on the final solution, and the process reward model (PRM) (Uesato et al., 2022; Lightman et al., 2023; Luo et al., 2024; Zhang et al., 2024a) that optimizes each reasoning step based on how likely each step would lead to a correct answer. + +In addition to text-based questions, reasoning also plays an indispensable role in understanding the physical world, such as comprehending concepts in an academic presentation, interpreting complex interactions among people or even detecting artificial anomalies. Thus, improving reasoning ability is also critical for multimodal LLMs (Tang et al., 2024c; b; Sun et al., 2024b; Cheng et al., 2024; Zhang et al., 2024e; Lin et al., 2024; Team et al., 2024; Wang et al., 2024a; Tang et al., 2024a) that process audio and visual inputs in addition to text, as the interactions among multiple modalities can largely increase the difficulty of the task. To this end, investigations have been performed on optimizing the reasoning process with multimodal inputs (Du et al., 2024), and on particularly visual LLMs (Qwen Team, 2024a; Xu et al., 2024; Du et al., 2025). However, current research on enhancing reasoning capabilities for multimodal LLMs has predominantly focused on solving mathematical problems and image inputs. This overlooks the importance of reasoning in general video understanding and the interactions among audio, visual and text modalities, largely limiting their scopes of applications. + +This paper proposes video-SALMONN-o1, the first open-source reasoning-enhanced audio-visual LLM with improved reasoning abilities in general video understanding tasks. The audio-visual reasoning capability of video-SALMONN-o1 is first enhanced by creating a new dataset with challenging questions and step-by-step solutions for su + +pervised fine-tuning (SFT), and then further boosted by the proposed variant of direct preference optimization (DPO), process DPO (pDPO) (Rafailov et al., 2024; Zhang et al., 2024c). pDPO achieves step-level pairwise reward modelling via an efficient contrastive step selection approach tailored for multimodal inputs. While being more effective than the standard PRMs in general video understanding, pDPO and the step selection make audio-visual reasoning more efficient without the need for an external reward model or a two-pass re-ranking pipeline. + +To evaluate the performance on multimodal reasoning for general video understanding, we propose the first reasoning-intensive video with audio understanding benchmark (RivaBench). 
RivaBench primarily focuses on three representative scenarios, including standup comedy, academic presentation and synthetic video detection. In particular, RivaBench contains over 4k high-quality question-answer pairs that are carefully crafted by human experts (e.g. medical doctors). Our key contributions are summarized as follows: + +- We propose video-SALMONN-o1, the first open-source reasoning-enhanced audio-visual LLM for general video understanding tasks. +- video-SALMONN-o1 is the first to explore RL-based reasoning optimization for general video understanding. The proposed pDPO method with efficient contrastive step selection further enhances reasoning abilities. +- We propose RivaBench, the first general video understanding benchmark focusing on challenging audio-visual reasoning scenarios with human expert annotations. +- video-SALMONN-o1 consistently outperforms the strong LLaVA-OneVision visual baseline on VideoMME, NExTQA and RivaBench, with $3 - 8\%$ absolute accuracy improvements. The pDPO training achieved $6 - 8\%$ improvements on RivaBench over the SFT model. Moreover, video-SALMONN-o1 is also the first open-source model that showed zero-shot synthetic video detection ability. + +# 2. Related Work + +# 2.1. CoT Reasoning + +CoT reasoning is one of the remarkable abilities of LLMs when solving difficult and complex problems. Earlier investigations employed prompt tuning and various search algorithms such as the Monte-Carlo tree search during inference time (Hao et al., 2023; Snell et al., 2024; Feng et al., 2024; Yao et al., 2023; Goyal et al., 2024). Later on, training stage approaches using reinforcement learning (RL) were developed to further and more radically boost the reasoning capabilities of LLMs. RL has also been used in multimodal LLMs but not from the perspective of improving reasoning (Yu et al., 2024b; Zhang et al., 2024d). PRMs which estimate the value function of each reasoning step have emerged + +as one of the most prevalent approaches in reasoning optimization tasks (Uesato et al., 2022; Lightman et al., 2023; Luo et al., 2024; Zhang et al., 2024a; Li et al., 2023b). + +However, constructing step-level annotations for PRM training can be expensive and difficult to scale up. As mitigation, Wang et al. (2024b) and Luo et al. (2024) proposed automatic step annotation using rollout, which approximated the expected correctness of each step by sampling multiple paths till the end with the same prefix solution. In particular, Luo et al. (2024) treats the first wrong step as the critical step to perform rollout which was found by binary search. + +# 2.2. Reasoning in Multimodal LLMs + +Researchers have been investigating optimizing CoT reasoning for multimodal LLMs to tackle increasingly challenging tasks. Most of them focus on extracting graphical or text information from an image and solving mathematical tasks based on the extracted information. Specifically, LLaVA-CoT (Xu et al., 2024) investigated better sampling and search algorithms to find a better reasoning path for math questions with image inputs. Virgo, on the other hand, explores the fine-tuning data organization and transferability of text-based reasoning tasks to image-based reasoning tasks (Du et al., 2025). Recently, MAmmoTH-VL (Guo et al., 2024) built a large-scale multimodal instruction-tuning dataset that can improve the question-answering performance on diverse modalities including video. 
Different from these works, video-SALMONN-o1 particularly focuses on general video understanding scenarios, where different parts of the audio-visual information are constantly referred to during the reasoning process. + +# 2.3. Benchmarks for Audio-visual LLMs + +The fast-paced development of multimodal LLMs has boosted the creation of more challenging video understanding benchmarks. Benchmark focus evolves from video description and perception abilities (Li et al., 2021; Alamri et al., 2019; Chen et al., 2023a; Li et al., 2022; Chen et al., 2023b; Ning et al., 2023; Mangalam et al., 2023; Yun et al., 2021; Sun et al., 2024a), to video reasoning abilities such as inference about temporal and causal relations (Xiao et al., 2021; Li et al., 2024b; 2023a; Fu et al., 2024; Liu et al., 2024a; Fang et al., 2024). In particular, NExT-QA (Xiao et al., 2021) focuses on causal relation reasoning such as why a certain action is performed, and Video-MME (Fu et al., 2024) contains questions that require the combination of both audio and visual information to perform reasoning. Our proposed RivaBench has more challenging questions that require longer thinking steps, broader world knowledge and a tighter combination of audio-visual information. + +![](images/5f63d22b1fe1118aaad2fb690d01f3f94cc76a5c28fb847b5878ad1d67c91396.jpg) +Figure 1. video-SALMONN-o1 model structure. The input video is processed by the visual and audio branches, generating encodings from the visual and audio frame sequences respectively. Two encoding streams are combined in an interleaved fashion to synchronize across time before sending to LLM. + +# 3. video-SALMONN-o1 + +# 3.1. Model Structure + +We adopt the same model structure as video-SALMONN 2, as shown in Fig. 1. As video-SALMONN 2 (Tang et al., 2024a), the model is built based on a pre-trained visual LLM by adding the audio encoder branch. The input video and audio streams are processed separately by the audio encoder and visual encoder and are then separately mapped to the dimension of the LLM input via individual modality aligners. To combine the audio and visual encodings, the interleaved synchronization module is employed as illustrated in Fig. 1. The groups of encodings per visual frame are equally spaced across time, and the audio encodings corresponding to the time between two visual frames $t_1$ and $t_2$ are inserted between the two groups of visual encodings. The process is summarized as in Eqn. (1): + +$$ +\mathbf {H} ^ {\mathrm {A V}} = \operatorname {C o n c a t} \left(\dots , \mathbf {H} _ {t _ {1}} ^ {V}, \mathbf {H} _ {t _ {1}: t _ {2}} ^ {A}, \mathbf {H} _ {t _ {2}} ^ {V}, \dots\right) \tag {1} +$$ + +where $\mathbf{H}^A\in \mathbb{R}^{m\times d}$ and $\mathbf{H}^V\in \mathbb{R}^{n\times d}$ represent groups of audio and visual encodings, and $m$ and $n$ are the number of encodings in each group. + +A multi-stage SFT pipeline with the cross-entropy loss on reference response is adopted to train video-SALMONN-o1 before optimizing the reasoning process with RL. Starting from the pre-trained visual model, the audio aligner is trained from scratch keeping other parts of the model frozen. Then, using paired audio-video data, the modality aligners and the low-rank adaptation (LoRA) module (Hu et al., 2022) are trained with other parts frozen. + +# 3.2. 
Reasoning-intensive SFT Data + +We empirically discovered that video understanding models (Cheng et al., 2024; Zhang et al., 2024e; Lin et al., 2024) generally lose the ability to perform step-by-step reasoning when a video is given, and always directly generate the final answer. To re-obtain the reasoning ability during the SFT stage, we create a set of more challenging question-answering pairs based on the same training set videos using proprietary LLMs, and the pipeline is shown in Fig. 2. + +![](images/e475c26022f57c984f870d86ffb243aa16cd8b53fae150262510fdeee1c18f0d.jpg) +Figure 2. Acquisition pipeline of reasoning-intensive SFT data. The question, answer and reasoning paths are generated by Gemini-1.5-pro taking the video with paired audio as inputs. GPT4o is employed for quality checks to ensure the QA-pair and the reasoning steps are valid and require logical thinking. + +![](images/28a855cd26f7878bc07a91d89a41198bf79e9eb820332272e760096c0ed806b6.jpg) + +For each video with paired audio, we use Gemini-1.5-pro to generate a question-answer pair with the reasoning steps. Then, to avoid bias in Gemini models and ensure the quality of the questions and reasoning steps, a quality check stage is employed using GPT-4o. Questions with poor quality will be discarded and a new question-answer pair will be generated again. In addition to the newly created question, we augment the original training set by generating reasoning paths with Gemini-1.5-pro and checking by GPT-4o following the pipeline to avoid network learning two distinct mechanisms for reasoning and direct answer. This turned out to be important to yield competitive reasoning performance from SFT in our empirical study. + +# 4. Training to Enhance Reasoning Abilities + +# 4.1. Preliminary + +The reasoning process refers to the LLM generating the $Q \to \{s_1, s_2, \dots, s_K\} \to A$ sequence, where $Q$ is the question, $A$ is the answer and $s_k$ are reasoning steps that logically connect the question $Q$ to the final answer $A$ . By treating this as a Markov decision process (MDP) and the LLM as the policy model, PRM is to provide feedback for each step $s_k$ that guides the LLM in making accurate reasoning by optimising the policy to maximise the reward. + +Following Wang et al. (2024b), the PRM is to estimate the + +![](images/40469f98ba3b5ec0dd221348a09fff415ed07b146e65201d6ed5b604380088f1.jpg) +Figure 3. Illustration of the contrastive step selection (top) and pairwise rollout (bottom) to construct per-step expected correctness score for pDPO. Contrastive step selection: Top 2 steps, $s_2$ and $s_5$ are selected in this example, and for $s_2$ , an alternative step, $s_2'$ , is sampled to form the preference pair. Pairwise rollout: Three rollouts are shown for each step and $s_2$ and $s_2'$ are step pairs with the same prefix solution. The answer correctness is checked using GPT-4o by comparing it against the reference answer. + +expected answer correctness, $p_{s_k}$ , of a prefix solution $\{s_{1:k}\}$ . The expected correctness score can be approximated with Monte Carlo sampling of multiple paths from the prefix solution to an answer $A_n$ as shown in Eqn (2). + +$$ +p _ {s _ {k}} \approx \frac {1}{N} \sum_ {n = 1} ^ {N} \mathbb {1} \left(A _ {n} = A _ {\mathrm {r e f}}\right) \tag {2} +$$ + +where $A_{\mathrm{ref}}$ is the reference answer and $A_{n}$ is one sampled answer. The sampled path $\{s_{k + 1,n}, s_{k + 2,n}, \ldots, s_{K(n),n}\}$ that leads to $A_{n}$ is referred to as a rollout. 
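A minimal sketch of the Monte Carlo estimate in Eqn. (2) is given below; `sample_rollout` and `same_answer` are hypothetical callables standing in for sampling one continuation from the policy and for the GPT-4o-based answer check used for annotation.

```python
def expected_step_correctness(prefix_steps, reference_answer, sample_rollout,
                              same_answer, num_rollouts=6):
    """Estimate p_{s_k}: the probability that continuing from the prefix solution
    {s_1, ..., s_k} eventually reaches the reference answer (Eqn. 2).

    sample_rollout(prefix_steps) -> final answer string of one sampled continuation
    same_answer(pred, ref)       -> bool, e.g. an LLM-based equivalence check
    """
    hits = 0
    for _ in range(num_rollouts):
        answer = sample_rollout(prefix_steps)
        hits += int(same_answer(answer, reference_answer))
    return hits / num_rollouts

# Two prefixes sharing s_{<k} but differing in s_k vs. s_k' can then be compared:
# the step with the higher estimate becomes the preferred one in a pDPO pair.
```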
The PRM training loss is then

$$
\mathcal{L}_{\mathrm{PRM}} = \sum_{k=1}^{K} p_{s_k} \log r_{s_k} + (1 - p_{s_k}) \log\left(1 - r_{s_k}\right) \tag{3}
$$

where $r_{s_k} \in [0,1]$ is the PRM prediction, which can be derived from the LLM output at the last token of each step with a fully connected layer followed by a sigmoid function $\sigma(\cdot)$.

# 4.2. Process DPO

As pointed out by Zhang et al. (2024a), predicting an absolute score fails to exploit the instruction-following capabilities of LLMs and is influenced by ambiguities in score standards. Both problems are more severe in audio-visual LLMs. Therefore, we propose pDPO for video-SALMONN-o1, a pairwise preference modelling approach that trains the model to select the better reasoning path rather than assigning absolute scores to the paths. Different from the pairwise preference reward model (PPRM) in Zhang et al. (2024a), which leverages the partial ordering of entire reasoning paths, pDPO models the preference for a specific reasoning step given the same prefix solution. Specifically, the reward function for each step of interest can be written as

$$
r(s_k) = \beta \log \frac{\pi_{\theta}\left(s_k \mid s_{<k}, \mathbf{H}^{\mathrm{AV}}\right)}{\pi_{\mathrm{ref}}\left(s_k \mid s_{<k}, \mathbf{H}^{\mathrm{AV}}\right)} + \beta \log Z\left(s_{<k}, \mathbf{H}^{\mathrm{AV}}\right) \tag{4}
$$

where $\pi_{\theta}$, $\pi_{\mathrm{ref}}$, $\beta$ and $Z(\cdot)$ are the LLM policy, the reference policy, a parameter controlling the deviation from $\pi_{\mathrm{ref}}$, and the partition function as in Rafailov et al. (2024), respectively. Since $Z(s_{<k}, \mathbf{H}^{\mathrm{AV}})$ depends only on the shared prefix solution and the audio-visual input, it cancels when two candidate steps $s_k$ and $s_k'$ with the same prefix are compared, so the step-level preference can be optimized directly with the DPO objective using hard labels $\alpha_{k} = \mathbb{1}(p_{s_{k}} > p_{s_{k}^{\prime}})$. Alternatively, $\alpha_{k} = \sigma((p_{s_{k}} - p_{s_{k}^{\prime}}) / \mu)$ can be used as soft labels for DPO to accommodate the estimation noise introduced by the limited number of rollouts in $p_{s_k}$, where $\mu$ is the calibration hyper-parameter determining how much we believe the process annotations. As a result, pDPO retains the advantages of PPRM while offering finer modelling granularity at each step. In practice, pDPO is integrated with PPRM to construct complete reasoning paths, enhancing overall performance. While PPRM enables full-solution-level preference training, ensuring the generation of entire solutions, pDPO complements it by providing fine-grained, step-level preference guidance.

# 4.3. Contrastive Step Selection

While rollouts allow automatic process annotation, the computational cost can be high when the numbers of rollouts and steps grow. However, in pDPO, certain steps are more error-prone and hence more valuable to optimize than others. For general video understanding, by examining a held-out validation set of reasoning paths with wrong answers, we found that over $70\%$ of the reasoning errors occur at steps where the model misinterprets or hallucinates the video content. Therefore, we choose to focus pDPO in particular on optimizing those steps.

To locate those steps, we quantify the susceptibility of each reasoning step to the input video by applying a tiny perturbation to the input video and measuring the length-normalized per-token KL divergence. Specifically, as shown in the top part of Fig.
3, for each step $s_k$ we compute the length-normalized KL-divergence by + +$$ +d _ {s _ {k}} = \frac {1}{| s _ {k} |} \sum_ {y _ {i} \in s _ {k}} D _ {\mathrm {K L}} \Big (P (y _ {i} | y _ {< i}, \mathbf {H} ^ {\mathrm {A V}}) | | P (y _ {i} | y _ {< i}, \tilde {\mathbf {H}} ^ {\mathrm {A V}}) \Big), +$$ + +where $y_{i}$ are wordpiece tokens and $D_{\mathrm{KL}}(\cdot)$ computes the KL-divergence between the output distributions with the original inputs $\mathbf{H}^{\mathrm{AV}}$ and perturbed inputs $\tilde{\mathbf{H}}^{\mathrm{AV}}$ . A higher $d_{s_k}$ indicates that the reasoning step $s_k$ is more susceptible to small input change, and this high susceptibility is likely to yield more diverged subsequent steps. We select the top $T$ steps with the highest $d_{s_k}$ to perform pairwise rollout. While this selection biases pDPO training towards video-dependent errors, the other text-based logic errors can be accommodated by PPRM with entire reasoning paths. + +# 5. Audio-visual Reasoning Benchmark + +The RivaBench is proposed to extend the scope of complex video understanding with three new reasoning-intensive application scenarios, including academic presentation (Academic), stand-up comedy (StandUp) and synthetic video detection (SynthDec). The statistics of videos for each scenario partition are shown in Table 1. + +The Academic partition is based on the M3AV (Chen et al., 2024) test set containing recordings of conference or lecture presentations spanning five different domains. Human experts with mathematical, engineering and medical backgrounds are recruited to provide questions, answers and detailed explanations based on the video clips. Example annotations are shown in Figs. 9 and 10 in Appendix C. + +While humour in videos has been explored from a descriptive perspective (Hyun et al., 2024; Liu et al., 2024b; Xie + +Table 1. RivaBench basic statistics. The duration is given by mean $\pm$ standard deviation. The SynthDec split contains 100 synthetic videos and 100 real videos that human annotators search to have similar content as synthetic videos. MCQ stands for multiple-choice questions. Video sources are all from YouTube. + +
| Attribute | Academic | StandUp | SynthDec |
|---|---|---|---|
| Num. of QA | 1,912 | 2,128 | 200 |
| Duration (s) | 47.2±66.1 | 43.2±15.1 | 8.1±3.2 |
| Format | 5-way MCQ | 5-way MCQ | Yes/No |
+ +et al., 2024), the StandUp partition of RivaBench explores from an audio-visual reasoning perspective. Specifically, instead of prompting the model to list all funny elements in the video, we particularly focus on understanding why a certain punchline is interesting and task the human annotators to set questions that require reasoning about the comedian's gestures, facial expression and speech content. Human annotators provide questions, answers and explanations (with automatically generated confusing choices), as shown in Figs. 7 and 8 in Appendix B. Subtitles exist in around $50\%$ of the videos in the StandUp partition but not in the Academic partition. However, due to the low frame sampling rate and hence subtitles are mostly incomplete, the model has to leverage both audio and visual information. This is reflected by the fact that GPT-4o (visual only) is worse than Gemini-1.5-pro (audio-visual) in Table 2. + +This paper proposes the SynthDec partition for synthetic video detection, which has great potential since video generation models are becoming increasingly powerful. This task requires LLM to classify whether a given video clip is real or synthetic by finding clues in the video such as motions violating physics rules or objects being distorted. Videos are generated using the Hunyuan-large model (Sun et al., 2024c) (see examples in Figs. 11 and 12). This is a challenging task that requires both logical reasoning and accurate perception of video content. The SynthDec partition can also serve as the performance indicator for reward models used to train video generators in the future. + +# 6. Experimental Setup + +# 6.1. Model and Training Specifications + +video-SALMONN-o1 is built based on the SigLIP (Zhai et al., 2023) visual encoder and Qwen 2 with 7B parameters backbone LLM. Two linear layers with GELU activation function are used (Hendrycks & Gimpel, 2016) as the visual aligner. The model processes videos at a 2-frame-per-second rate with a maximum of 60 frames. + +The Whisper-Large-v3 encoder (Radford et al., 2023) is used as the audio encoder, and the window-level Q-Former (Tang et al., 2024c) with a window length of 0.2 seconds is + +![](images/b8029df0f5ded031db12ff4a08dbb3d1055c7f297b5a93dd701f96749efdb7c6.jpg) +Figure 4. Distributions of the numbers of reasoning steps in SFT data. Left: Distribution of the entire SFT data. Right: Distribution on the reasoning-intensive subset of SFT data. Due to the difficulty of the reasoning-intensive subset, more reasoning steps are required in general for samples in this set. + +used as the audio aligner, producing 150 audio tokens for every 30 seconds. We set LoRA hyper-parameters $r = 64$ and $\alpha = 256$ for the backbone LLM for both SFT and pDPO. During training, the visual encoder and aligner, audio encoder, and LLM remain frozen. SFT is performed on $16 \times \mathrm{A}100$ GPUs for 48 hours and pDPO is trained with $8 \times \mathrm{A}100$ GPUs for 24 hours. Prompts used for reasoning are shown in Appendix E. The code, SFT data, pDPO data and model checkpoints will be released. + +# 6.2.Data + +Following Tang et al. (2024a), the audio modality alignment stage employs LibriSpeech-960h (Panayotov et al., 2015) ASR data and AudioCaps (Kim et al., 2019) audio caption data to train the audio aligner. During the audio-visual SFT stage, 13k videos with rich audio information are selected with high-quality audio-visual captions. 
Around 150k normal question-answer (QA) pairs are directly generated using GPT-4o by providing detailed audio-visual captions, and an additional subset of 30k reasoning-intensive SFT QA pairs are generated with the proposed data generation pipeline. Each QA, regardless of the difficulty, is associated with reasoning steps, and the distributions of the numbers of reasoning steps for the QA pairs used for SFT are shown in Fig. 4. Both captions and QA pairs are used for SFT. + +The reasoning-intensive subset is used to collect the data for pDPO training by sampling 10 paths for each QA. The QA pairs where the SFT model generates incorrect solutions are retained to perform rollouts and others that only contain correct solutions are discarded. For complete solutions, instead of directly comparing the paths (Zhang et al., 2024a), we compare each pair of solutions against the reference answer using GPT-4o and choose the one closer to the reference as the preferred solution. For intermediate steps, we choose the top 3 steps based on contrastive step selection, and 6 rollouts are performed for each chosen step. As a result, $\sim 100\mathrm{k}$ pairs of complete solutions from $5\mathrm{k}$ video clips are + +selected, and an extra 100k pairs of step-level partial solution pairs from these complete solutions are used for pDPO. Greedy decoding is used during inference and the effort in generating data using pDPO only affects training time and has no impact to the latency of the model during inference. + +Besides RivaBench, video-SALMONN-o1 is also evaluated on Video-MME (Fu et al., 2024) and NExT-QA (Xiao et al., 2021) benchmarks with challenging reasoning questions where the former is an audio-visual task and the latter focuses on visual information only. For consistency, paired audios are also provided for NExT-QA videos if they exist. Note that the synthetic video detection task is never seen in model training, and hence is a zero-shot emergent ability. + +# 7. Results + +# 7.1. Main Results + +The main results on VideoMME, NExT-QA and the RivaBench are shown in Table 2. No subtitles are given to any of the models under test for VideoMME. As performance references, we include GPT-4o (checkpoint at 2024-08-06) and Gemini-1.5-pro, with their results on VideoMME as reported in Fu et al. (2024). When testing GPT-4o with videos, each video is split into images at a frame rate of 2 fps with a maximum of 30 frames due to token limitation, and the sequence of images is sent as the input. For open-source models, we compare video-SALMONN-o1 to LLaVA-OneVision (Li et al., 2024a) (same visual encoder and LLM backbone), together with video-SALMONN (Sun et al., 2024b) and Video-LLaMA 2 (Cheng et al., 2024) as the two most recent audio-visual LLMs. + +Proprietary LLM performance on RivaBench: For the two proprietary LLMs, GPT-4o underperforms Gemini-1.5-pro on StandUp and Academic test sets due to the lack of audio information. This indicates that RivaBench provides challenging questions that require more audio-visual joint understanding compared to VideoMME. On the SynthDec set, since only the visual part is synthesized, GPT-4o demonstrated a stronger ability. Moreover, by performing reasoning with GPT-4o and Gemini-1.5-pro, larger improvements are found on StandUp and Academic test sets than VideoMME and NExT-QA, indicating the necessity of reasoning on RivaBench. 
+ +Open-source LLM performance comparison: Audiovisual SFT on video-SALMONN-o1 already yields better performance than LLaVA-OneVision on VideoMME due to the ability to comprehend speech and audio information, whereas no obvious improvements are found on the other benchmarks. The main improvements on other benchmarks come from pDPO, which achieved $4.1\%$ , $8.1\%$ and $5.8\%$ absolute accuracy improvements on NExT-QA, StandUp and Academic test sets respectively compared to the SFT + +Table 2. Main results of video-SALMONN-o1 compared against other visual (V) and audio-visual $(\mathrm{A} + \mathrm{V})$ LLMs. SFT refers to the model after SFT with reasoning data and pDPO refers to the model obtained after training with pDPO based on the same SFT model. F1-score (Precision/Recall) is reported for SynthDec and accuracy is reported for others. Results with $\dagger$ are directly taken from the corresponding papers. video-SALMONN-o1 performs reasoning during inference and other open-source models give answers directly. + +
| Model | Modality | VideoMME | NExT-QA | RivaBench: StandUp | RivaBench: Academic | RivaBench: SynthDec (P/R) |
|---|---|---|---|---|---|---|
| *Proprietary models* | | | | | | |
| Gemini-1.5-pro (Team et al., 2024) | A+V | 75.0%† | 79.2% | 75.8% | 67.1% | 23.6% (55%/15%) |
| Gemini-1.5-pro+reasoning | A+V | 75.1% | 79.5% | 81.8% | 69.5% | 40.0% (49%/34%) |
| GPT-4o (OpenAI Team, 2024) | V | 71.9%† | 81.7% | 63.3% | 60.0% | 34.1% (90%/21%) |
| GPT-4o+reasoning | V | 72.1% | 81.9% | 69.6% | 61.0% | 25.8% (53%/17%) |
| *Open-source baselines* | | | | | | |
| LLaVA-OneVision (Li et al., 2024a) | V | 58.2%† | 79.4%† | 67.2% | 45.8% | 0.0% (97%/0%) |
| video-SALMONN (Sun et al., 2024b) | A+V | 43.3% | 49.2% | 47.8% | 33.6% | 0.0% (100%/0%) |
| Video-LLaMA 2.1 (Cheng et al., 2024) | A+V | 54.9%† | 75.6% | 53.7% | 34.3% | 0.0% (99%/0%) |
| video-SALMONN-o1 (ours, SFT) | A+V | 62.9% | 78.2% | 68.6% | 42.5% | 5.8% (97%/5%) |
| video-SALMONN-o1 (ours, pDPO) | A+V | 65.6% | 82.3% | 76.7% | 48.3% | 17.8% (87%/13%) |
+ +Table 3. Effect of different parts of the audio-visual SFT data on VideoMME, Academic and StandUp test sets. Underscore for second-best results. "w/o reasoning-intensive part" means removing the reasoning-intensive SFT data, and "w/o any reasoning" always directly outputting answers during SFT. "Reasoning-intensive part only" always performs reasoning for QA. + +
| Training Data | Inference Reasoning | VideoMME | NExT-QA | Academic | StandUp |
|---|---|---|---|---|---|
| Full SFT data | ✗ | 63.7% | 80.7% | 45.2% | 72.3% |
| Full SFT data | ✓ | 62.9% | 78.2% | 42.5% | 68.6% |
| w/o any reasoning | ✗ | 63.2% | 81.0% | 44.1% | 71.1% |
| w/o reasoning-intensive part | ✗ | 62.7% | 78.9% | 44.7% | 71.5% |
| w/o reasoning-intensive part | ✓ | 61.6% | 76.6% | 42.3% | 67.5% |
| Reasoning-intensive part only | ✓ | 58.8% | 75.2% | 40.1% | 63.5% |
| Full SFT data + pDPO | ✓ | 65.6% | 82.3% | 48.3% | 76.7% |
+ +model. Larger improvements are found on the RivaBench with $6 - 8\%$ absolute accuracy improvements obtained compared to LLaVA-OneVision, and video-SALMONN-o1 even performs better on the StandUp test set than Gemini-1.5-pro without reasoning. Besides, compared to other audio-visual LLMs, video-SALMONN-o1 exhibits better interpretability of the model output, and the cause of mistakes can be located by analyzing the reasoning process. We also provide the comparison between video-SALMONN-o1 and Gemini-1.5-pro on VideoHallucer benchmark (Wang et al., 2024c) in Appendix I for the hallucination aspect. + +Zero-shot synthetic video detection: video-SALMONN-ol achieves zero-shot synthetic video detection ability while other open-source models output "real" all the time, which also benefit from a better explanation with examples of anomalies in synthesized videos in the prompt. However, even for the videos where the motions obviously violate physics rules, current state-of-the-art video LLMs still fail + +to detect most of the time. + +In addition, two qualitative examples are shown in Figs. 16 and 17 in Appendix G, where LlaVA-OneVison (and also other audio-visual models) are unable to provide the reasoning steps and the final answer is completely biased to "Real". On the other hand, video-SALMONN-o1 can look for distortions in the video as part of its reasoning process, leading to the correct identification of synthesized videos. + +# 7.2. Effect of SFT Data + +The audio-visual SFT data is crucial for video-SALMONNol to gain the initial audio-visual reasoning ability, and the effect of different data partitions is shown in Table 3. + +Direct answer outperforms reasoning after SFT: Directly outputting a short answer or an option has been the dominating output mode for audio-visual LLMs on general video understanding, a major difference to math questions. Com + +Table 4. Effect of different reward modelling methods on VideoMME, NExT-QA, the StandUp and Academic split of RivaBench. Major@20 and RM@20 are evaluated following Zhang et al. (2024a), where Major@20 refers to the accuracy under majority voting with 20 sampled paths, and RM@20 is the best-of-n with 20 samples. Samples are all generated from the model after SFT. pDPO with full paths only uses preference pairs of complete reasoning paths. + +
| Training Configuration | Inference | VideoMME | NExT-QA | StandUp | Academic |
| --- | --- | --- | --- | --- | --- |
| SFT | 1-best | 62.9% | 78.2% | 68.6% | 42.5% |
| SFT | Major@20 | 63.5% | 81.5% | 73.5% | 45.3% |
| SFT + ORM | RM@20 | 62.7% | 78.5% | 69.0% | 42.6% |
| SFT + PRM | RM@20 | 63.5% | 79.3% | 72.1% | 43.9% |
| SFT + pDPO | 1-best | 65.6% | 82.3% | 76.7% | 48.3% |

Comparing row 2 to row 1 in Table 3, when using all the SFT data including the reasoning-intensive part, the model after SFT is still better at directly generating the answer than at performing reasoning. This is due to the exposure bias in teacher forcing, which has a much higher impact on the reasoning paths as they are much longer sequences. By learning on its own samples, pDPO mitigates this exposure bias and achieves consistently better performance than the SFT model. Next, comparing row 1 to rows 3 and 4 in Table 3, when directly outputting the answer during inference, incorporating reasoning steps in SFT does not always yield an improvement on VideoMME and NExT-QA, despite being slightly helpful on RivaBench.

The reasoning-intensive part is important: When excluding the reasoning-intensive part, there is a clear degradation in model performance with reasoning during inference, showing the importance of this part of the data for enabling better reasoning performance. However, when only using the reasoning-intensive part for SFT, the model struggles to acquire the fundamental audio-visual perception abilities, yielding sub-optimal performance.

# 7.3. Effect of pDPO Training

We then analyse different reward modelling techniques for the model performance in Table 4. In addition to pairwise preference models, we include ORM and PRM as proposed in Lightman et al. (2023) as follows:

ORM: A projection layer is added to the LLM output states and projects the last output state to a scalar, which is then passed through a sigmoid activation function to predict 1 if the final answer is correct, and 0 otherwise.

PRM: A projection layer is added to the LLM output states and projects the state at the end of each step to a scalar with a sigmoid to predict $\mathbb{1}(p_{s_k} > 0)$. The score of each solution is the lowest score among all steps (Wang et al., 2024b).

Both ORM and PRM are initialized with video-SALMONN-o1 after the SFT stage. Best-of-n is used for ORM and PRM, where 20 sampled solutions are generated from the SFT model and the one with the highest score is selected.

![](images/69f1077316ff0ba7f9b67792afb65c4c631f39ff88e0c3e943fe7673e8315b4e.jpg)
Figure 5. Comparison between different top T steps selected for pDPO. Pairs of full solution paths are always used in addition to pairs of intermediate steps.

Moreover, majority voting among the 20 samples is used as a baseline, which is consistently marginally better than the 1-best solution across all test sets.

While ORM showed mixed results compared to the 1-best solution from the SFT model, PRM showed consistent but marginal improvements and is on par with majority voting. The training loss of PRM and ORM only dropped by about $5\%$, which reflects the difficulty of learning raw scores for general video understanding tasks. Last, comparing the models above against the pDPO model, the use of pairwise preference models is much more effective than predicting the raw score, showing the difficulty of direct raw score modelling in general video QA. Qualitative examples comparing answers between SFT and pDPO are provided in Figs. 13 to 15 in Appendix F.

Effect of Contrastive Step Selection. To analyze the effect of the number of steps selected for pairwise training, we conducted experiments without intermediate pairs of steps and with all intermediate pairs, in addition to using the top three steps from the contrastive step selection. The comparisons are given in Fig. 5.
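As a concrete illustration of the ORM and PRM scoring and the best-of-n selection described earlier in this section, the snippet below sketches how the two scoring heads could sit on top of an LLM's output hidden states. This is a minimal PyTorch sketch under our own naming and interface assumptions (e.g., that step boundaries are available as token indices), not the released video-SALMONN-o1 implementation.

```python
# Minimal sketch (not the released code): ORM/PRM scoring heads over LLM hidden
# states and best-of-n answer selection, following the description in Sec. 7.3.
import torch
import torch.nn as nn


class RewardHead(nn.Module):
    """Projects selected LLM output states to scalar scores in (0, 1)."""

    def __init__(self, hidden_size: int):
        super().__init__()
        self.proj = nn.Linear(hidden_size, 1)

    def orm_score(self, hidden: torch.Tensor) -> torch.Tensor:
        # ORM: score only the last output state of the full solution.
        return torch.sigmoid(self.proj(hidden[-1])).squeeze(-1)

    def prm_score(self, hidden: torch.Tensor, step_ends: list[int]) -> torch.Tensor:
        # PRM: score the state at the end of each reasoning step; the solution
        # score is the lowest step score (Wang et al., 2024b).
        step_scores = torch.sigmoid(self.proj(hidden[step_ends])).squeeze(-1)
        return step_scores.min()


def best_of_n(scores: list[float], answers: list[str]) -> str:
    # Best-of-n: return the answer of the highest-scoring sampled solution.
    best_idx = max(range(len(scores)), key=lambda i: scores[i])
    return answers[best_idx]
```

In the RM@20 setting of Table 4, each of the 20 solutions sampled from the SFT model would be scored this way and the highest-scoring one kept.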
Using intermediate steps in pDPO further and consistently improves model performance compared to only using the full solutions, especially on questions that require frequent reference to the video or audio information at intermediate reasoning steps. A case study qualitatively showing the effect of contrastive step selection is included in Appendix H.

# 8. Conclusions

We propose video-SALMONN-o1, the first open-source audio-visual LLM with enhanced reasoning abilities. video-SALMONN-o1 is the first to explore reasoning process optimization for general video understanding and proposes the pDPO method with an efficient contrastive step selection algorithm. To further evaluate the reasoning abilities of audio-visual LLMs, RivaBench is introduced with innovative and challenging tasks and over 4000 high-quality human expert annotations. video-SALMONN-o1 consistently outperforms the strong LLaVA-OneVision baseline with $3 - 8\%$ absolute accuracy improvements. pDPO training consistently outperformed the SFT model. Moreover, video-SALMONN-o1 showed zero-shot synthetic video detection abilities as a result of the enhanced reasoning abilities.

# Impact Statement

By enhancing reasoning abilities in general video understanding, video-SALMONN-o1 provides a more transparent and interpretable interface that is compatible with general videos to access and explain model responses and behaviours. This is indispensable to ensure the reliability of LLMs when applied to different video understanding scenarios and will be largely beneficial for pinpointing the specific causes of errors when the model generates dubious or toxic content, thus enhancing AI safety.

The approaches in this paper do not give rise to any additional potential biases beyond the ones directly inherited from the pre-trained model checkpoints used. The audio encoder and visual encoder might work worse for people from particular demographics. The framework also inherits biases from all the LLMs used in this paper. To mitigate potential biases, we clearly describe the nature of each dataset and provide clear and adequate references to all the resources we used for video-SALMONN-o1.

The ability of video-SALMONN-o1 to understand speech in videos could lead to potential technology abuses like surveillance and eavesdropping. To counter this, we've consulted with legal experts to establish clear usage guidelines, reducing risks and addressing concerns, highlighting our dedication to responsible research sharing.

# References

Alamri, H., Cartillier, V., Das, A., Wang, J., Cherian, A., Essa, I., Batra, D., Marks, T. K., Hori, C., and Anderson, P. Audio visual scene-aware dialog. In Proc. CVPR, 2019.

Chen, S., He, X., Guo, L., Zhu, X., Wang, W., Tang, J., and Liu, J. VALOR: Vision-audio-language omni-perception pretraining model and dataset. arXiv preprint arXiv:2304.08345, 2023a.

Chen, S., Li, H., Wang, Q., Zhao, Z., Sun, M., Zhu, X., and Liu, J. VAST: A vision-audio-subtitle-text omni-modality foundation model and dataset. 2023b.

Chen, Z., Liu, H., Yu, W., Sun, G., Liu, H., Wu, J., Zhang, C., Wang, Y., and Wang, Y. M3AV: A multimodal, multi-genre, and multipurpose audio-visual academic lecture dataset. In Proc. ACL, 2024.

Cheng, Z., Leng, S., Zhang, H., Xin, Y., Li, X., Chen, G., Zhu, Y., Zhang, W., Luo, Z., Zhao, D., and Bing, L. VideoLLaMA 2: Advancing spatial-temporal modeling and audio understanding in Video-LLMs. arXiv preprint arXiv:2406.07476, 2024.
+Cobbe, K., Kosaraju, V., Bavarian, M., Chen, M., Jun, H., Kaiser, L., Plappert, M., Tworek, J., Hilton, J., Nakano, R., Hesse, C., and Schulman, J. Training verifiers to solve math word problems. arXiv:2110.14168, 2021. +DeepSeek Team. Deepseek-r1-lite-preview is now live: unleashing supercharged reasoning power, 2024. +Du, Y., Ma, Z., Yang, Y., Deng, K., Chen, X., Yang, B., Xiang, Y., Liu, M., and Qin, B. CoT-ST: Enhancing LLM-based speech translation with multimodal chain-of-thought. arXiv:2409.19510, 2024. +Du, Y., Liu, Z., Li, Y., Zhao, W. X., Huo, Y., Wang, B., Chen, W., Liu, Z., Wang, Z., and Wen, J.-R. Virgo: A preliminary exploration on reproducing o1-like mllm. arXiv:2501.01904, 2025. +Fang, X., Mao, K., Duan, H., Zhao, X., Li, Y., Lin, D., and Chen, K. MMBench-Video: A long-form multi-shot benchmark for holistic video understanding. arXiv preprint arXiv:2406.14515, 2024. +Feng, X., Wan, Z., Wen, M., Wen, Y., Zhang, W., and Wang, J. Alphazero-like tree-search can guide large language model decoding and training. In Proc. ICML, 2024. +Fu, C., Dai, Y., Luo, Y., Li, L., Ren, S., Zhang, R., Wang, Z., Zhou, C., Shen, Y., Zhang, M., et al. Video-MME: The first-ever comprehensive evaluation benchmark of multi-modal LLMs in video analysis. arXiv preprint arXiv:2405.21075, 2024. +Goyal, S., Ji, Z., Rawat, A. S., Menon, A. K., Kumar, S., and Nagarajan, V. Think before you speak: Training language models with pause tokens. In Proc. ICLR, 2024. +Guo, J., Zheng, T., Bai, Y., Li, B., Wang, Y., Zhu, K., Li, Y., Neubig, G., Chen, W., and Yue, X. MAmmoTH-VL: Eliciting multimodal reasoning with instruction tuning at scale. arXiv:2412.05237, 2024. + +Hao, S., Gu, Y., Ma, H., Hong, J. J., Wang, Z., Wang, D. Z., and Hu, Z. Reasoning with language model is planning with world model. In Proc. EMNLP, 2023. +Hendrycks, D. and Gimpel, K. Gaussian error linear units (GELUs). arXiv preprint arXiv:1606.08415, 2016. +Hu, E. J., Shen, Y., Wallis, P., Allen-Zhu, Z., Li, Y., Wang, S., Wang, L., and Chen, W. LoRA: Low-rank adaptation of large language models. In Proc. ICLR, 2022. +Hyun, L., Sung-Bin, K., Han, S., Yu, Y., and Oh, T.-H. SMILE: Multimodal dataset for understanding laughter in video with language models. In Duh, K., Gomez, H., and Bethard, S. (eds.), Proc. NAACL Findings, 2024. +Kim, C. D., Kim, B., Lee, H., and Kim, G. AudioCaps: Generating captions for audiios in the wild. In Proc. NAACL-HLT, 2019. +Li, B., Zhang, Y., Guo, D., Zhang, R., Li, F., Zhang, H., Zhang, K., Zhang, P., Li, Y., Liu, Z., and Li, C. LLaVA-OneVision: Easy visual task transfer. arXiv:2408.03326, 2024a. +Li, G., Wei, Y., Tian, Y., Xu, C., Wen, J.-R., and Hu, D. Learning to answer questions in dynamic audio-visual scenarios. In Proc. CVPR, 2022. +Li, K., Wang, Y., He, Y., Li, Y., Wang, Y., Liu, Y., Wang, Z., Xu, J., Chen, G., Luo, P., et al. Mvbench: A comprehensive multi-modal video understanding benchmark. In Proc. CVPR, 2024b. +Li, L., Lei, J., Gan, Z., Yu, L., Chen, Y.-C., Pillai, R., Cheng, Y., Zhou, L., Wang, X. E., Wang, W. Y., et al. Value: A multi-task benchmark for video-and-language understanding evaluation. arXiv preprint arXiv:2106.04632, 2021. +Li, S., Li, L., Ren, S., Liu, Y., Liu, Y., Gao, R., Sun, X., and Hou, L. Vitatecs: A diagnostic dataset for temporal concept understanding of video-language models. arXiv preprint arXiv:2311.17404, 2023a. +Li, Y., Lin, Z., Zhang, S., Fu, Q., Chen, B., Lou, J.-G., and Chen, W. Making language models better reasoners with step-aware verifier. 
In Rogers, A., Boyd-Graber, J., and Okazaki, N. (eds.), Proc. ACL, 2023b. +Lightman, H., Kosaraju, V., Burda, Y., Edwards, H., Baker, B., Lee, T., Leike, J., Schulman, J., Sutskever, I., and Cobbe, K. Let's verify step by step. arXiv:2305.20050, 2023. +Lin, B., Zhu, B., Ye, Y., Ning, M., Jin, P., and Yuan, L. Video-LLaVA: Learning united visual representation by alignment before projection. In Proc. CVPR, 2024. + +Liu, Y., Li, S., Liu, Y., Wang, Y., Ren, S., Li, L., Chen, S., Sun, X., and Hou, L. Tempcompass: Do video LLMs really understand videos? arXiv preprint arXiv:2403.00476, 2024a. +Liu, Z.-S., Courant, R., and Kalogeiton, V. FunnyNet-W: Multimodal learning of funny moments in videos in the wild. arXiv:2401.04210, 2024b. +Luo, L., Liu, Y., Liu, R., Phatale, S., Guo, M., Lara, H., Li, Y., Shu, L., Zhu, Y., Meng, L., Sun, J., and Rastogi, A. Improve mathematical reasoning in language models by automated process supervision. arXiv:2406.06592, 2024. +Mangalam, K., Akshulakov, R., and Malik, J. Egoschema: A diagnostic benchmark for very long-form video language understanding. In Proc. NeurIPS, 2023. +Ning, M., Zhu, B., Xie, Y., Lin, B., Cui, J., Yuan, L., Chen, D., and Yuan, L. Video-bench: A comprehensive benchmark and toolkit for evaluating video-based large language models. arXiv preprint arXiv:2311.16103, 2023. +OpenAI. Learning to reason with large language models, 2024. +OpenAI Team. Gpt-4o system card. arXiv:2410.21276, 2024. +Panayotov, V., Chen, G., Povey, D., and Khudanpur, S. Librispeech: An ASR corpus based on public domain audio books. In Proc. ICASSP, 2015. +Qwen Team. To see the world with wisdom, 2024a. +Qwen Team. QwQ: Reflect deeply on the boundaries of the unknown, 2024b. +Radford, A., Kim, J. W., Xu, T., Brockman, G., McLeavey, C., and Sutskever, I. Robust Speech Recognition via Large-scale Weak Supervision. In Proc. ICML, 2023. +Rafailov, R., Sharma, A., Mitchell, E., Manning, C. D., Ermon, S., and Finn, C. Direct Preference Optimization: Your language model is secretly a reward model. In Proc. NeurIPS, 2024. +Snell, C., Lee, J., Xu, K., and Kumar, A. Scaling LLM test-time compute optimally can be more effective than scaling model parameters. arXiv:2408.03314, 2024. +Sun, G., Manakul, P., Liusie, A., Pipatanakul, K., Zhang, C., Woodland, P., and Gales, M. CrossCheckGPT: Universal hallucination ranking for multimodal foundation models. arXiv preprint arXiv:2405.13684, 2024a. +Sun, G., Yu, W., Tang, C., Chen, X., Tan, T., Li, W., Lu, L., MA, Z., Wang, Y., and Zhang, C. video-SALMONN: Speech-enhanced audio-visual large language models. In Proc. ICML, 2024b. + +Sun, X., Chen, Y., Huang, Y., et al. Hunyuan-Large: An open-source moe model with 52 billion activated parameters by tencent. arXiv preprint arXiv:2411.02265, 2024c. +Sun, Z., Yu, L., Shen, Y., Liu, W., Yang, Y., Welleck, S., and Gan, C. Easy-to-hard generalization: Scalable alignment beyond human supervision. In Proc. NeurIPS, 2024d. +Tang, C., Li, Y., Yang, Y., Zhuang, J., Sun, G., Li, W., Ma, Z., and Zhang, C. Enhancing multimodal LLM for detailed and accurate video captioning using multi-round preference optimization. arXiv:2410.06682, 2024a. +Tang, C., Yu, W., Sun, G., Chen, X., Tan, T., Li, W., Lu, L., MA, Z., and Zhang, C. SALMONN: Towards generic hearing abilities for large language models. In Proc. ICLR, 2024b. +Tang, C., Yu, W., Sun, G., Chen, X., Tan, T., Li, W., Lu, L., Ma, Z., and Zhang, C. Extending large language models for speech and audio captioning. In Proc. ICASSP, 2024c. 
+Team, G., Anil, R., Borgeaud, S., Alayrac, J.-B., Yu, J., et al. Gemini: A family of highly capable multimodal models. arXiv preprint arXiv:2312.11805, 2024. +Uesato, J., Kushner, N., Kumar, R., Song, F., Siegel, N., Wang, L., Creswell, A., Irving, G., and Higgins, I. Solving math word problems with process- and outcome-based feedback. arXiv:2211.14275, 2022. +Wang, P., Bai, S., Tan, S., Wang, S., Fan, Z., Bai, J., Chen, K., Liu, X., Wang, J., Ge, W., Fan, Y., Dang, K., Du, M., Ren, X., Men, R., Liu, D., Zhou, C., Zhou, J., and Lin, J. Qwen2-VL: Enhancing vision-language model's perception of the world at any resolution. arXiv:2409.12191, 2024a. +Wang, P., Li, L., Shao, Z., Xu, R., Dai, D., Li, Y., Chen, D., Wu, Y., and Sui, Z. Math-Shepherd: Verify and reinforce LLMs step-by-step without human annotations. In Proc. ACL, 2024b. +Wang, Y., Wang, Y., Zhao, D., Xie, C., and Zheng, Z. Videohallucer: Evaluating intrinsic and extrinsic hallucinations in large video-language models. arXiv:2406.16338, 2024c. +Wei, J., Wang, X., Schuurmans, D., Bosma, M., Ichter, B., Xia, F., Chi, E., Le, Q., and Zhou, D. Chain-of-thought prompting elicits reasoning in large language models. In Proc. NeurIPS, 2022. +Xiao, J., Shang, X., Yao, A., and Chua, T.-S. Next-QA: Next phase of question-answering to explaining temporal actions. In Proc. CVPR, 2021. + +Xie, B., Zhang, S., Zhou, Z., Li, B., Zhang, Y., Hessel, J., Yang, J., and Liu, Z. FunQA: Towards surprising video comprehension. arXiv:2306.14899, 2024. +Xu, G., Jin, P., Li, H., Song, Y., Sun, L., and Yuan, L. +LLaVA-CoT: Let vision language models reason step-by-step. arXiv:2411.10440, 2024. +Yang, A., Zhang, B., Hui, B., Gao, B., Yu, B., Li, C., Liu, D., Tu, J., Zhou, J., Lin, J., Lu, K., Xue, M., Lin, R., Liu, T., Ren, X., and Zhang, Z. Qwen2.5-Math technical report: Toward mathematical expert model via self-improvement. arXiv:2409.12122, 2024. +Yao, S., Yu, D., Zhao, J., Shafran, I., Griffiths, T. L., Cao, Y., and Narasimhan, K. Tree of Thoughts: Deliberate problem solving with large language models. In Proc. NeurIPS, 2023. +Ying, H., Zhang, S., Li, L., Zhou, Z., Shao, Y., Fei, Z., Ma, Y., Hong, J., Liu, K., Wang, Z., Wang, Y., Wu, Z., Li, S., Zhou, F., Liu, H., Zhang, S., Zhang, W., Yan, H., Qiu, X., Wang, J., Chen, K., and Lin, D. InternLM-Math: Open math large language models toward verifiable reasoning. arXiv:2402.06332, 2024. +Yu, F., Gao, A., and Wang, B. OVM, outcome-supervised value models for planning in mathematical reasoning. In Duh, K., Gomez, H., and Bethard, S. (eds.), Proc. NAACL Findings, 2024a. +Yu, T., Zhang, H., Li, Q., Xu, Q., Yao, Y., Chen, D., Lu, X., Cui, G., Dang, Y., He, T., Feng, X., Song, J., Zheng, B., Liu, Z., Chua, T.-S., and Sun, M. Rlaif-v: Open-source ai feedback leads to super gpt-4v trustworthiness. arXiv preprint arXiv:2405.17220, 2024b. +Yuan, L., Cui, G., Wang, H., Ding, N., Wang, X., Deng, J., Shan, B., Chen, H., Xie, R., Lin, Y., Liu, Z., Zhou, B., Peng, H., Liu, Z., and Sun, M. Advancing LLM reasoning generalists with preference trees. arXiv:2404.02078, 2024. +Yun, H., Yu, Y., Yang, W., Lee, K., and Kim, G. Pano-AVQA: Grounded audio-visual question answering on 360deg videos. In Proc. ICCV, 2021. +Zhai, X., Mustafa, B., Kolesnikov, A., and Beyer, L. Sig-moid loss for language image pre-training. arXiv preprint arXiv:2303.15343, 2023. +Zhang, D., Wu, J., Lei, J., Che, T., Li, J., Xie, T., Huang, X., Zhang, S., Pavone, M., Li, Y., Ouyang, W., and Zhou, D. 
LLaMA-Berry: Pairwise optimization for o1-like Olympiad-level mathematical reasoning. arXiv:2410.02884, 2024a. + +Zhang, L., Hosseini, A., Bansal, H., Kazemi, M., Kumar, A., and Agarwal, R. Generative verifiers: Reward modeling as next-token prediction. arXiv:2408.15240, 2024b. +Zhang, R., Gui, L., Sun, Z., Feng, Y., Xu, K., Zhang, Y., Fu, D., Li, C., Hauptmann, A., Bisk, Y., and Yang, Y. Direct Preference Optimization of Video Large Multimodal Models from Language Model Reward. arXiv preprint arXiv:2404.01258, 2024c. +Zhang, R., Gui, L., Sun, Z., Feng, Y., Xu, K., Zhang, Y., Fu, D., Li, C., Hauptmann, A., Bisk, Y., and Yang, Y. Direct preference optimization of video large multimodal models from language model reward. arXiv:2404.01258, 2024d. +Zhang, Y., Wu, J., Li, W., Li, B., Ma, Z., Liu, Z., and Li, C. Video instruction tuning with synthetic data. arXiv:2410.02713, 2024e. +Zhang, Y., Wu, S., Yang, Y., Shu, J., Xiao, J., Kong, C., and Sang, J. o1-Coder: An o1 replication for coding. arXiv:2412.00154, 2024f. +Zhao, Y., Yin, H., Zeng, B., Wang, H., Shi, T., Lyu, C., Wang, L., Luo, W., and Zhang, K. Marco-o1: Towards open reasoning models for open-ended solutions. arXiv:2411.14405, 2024. + +# A. Reasoning SFT Data Example + +![](images/76032fc996f52a1c3da5342ba899f3b489a3dbfc11abb6b4b80a182250de4282.jpg) +Speech: $99\%$ of smart phones are either made of glass or plastic, but which one's better... + +Question: If the item is frequently handled but almost no dropping risk, which type of material should he choose? + +Answer: The glass one + +# Reasoning: + +1. The question focuses on two aspects: (1) frequent handling, which implies a higher risk of scratches, and (2) minimal dropping risks, suggesting... +2. Analyze the first test (scratch test): ... +3. Analyze the second test (dropping test): ... +4. Compare the two test under the given scenario: ... +5. Conclude that under the given scenario, the glass one is preferred +6. The final answer: the glass one + +# B. StandUp Data Examples + +Two examples of the StandUp part of RivaBench are shown in Fig. 7 and 8 respectively. + +![](images/90ff2d4022c4cb49096d7323a5a10256c545bd1da7c0f72c3f4b41c5e28ea701.jpg) +Figure 6. Example of reasoning SFT data +Figure 7. Example of StandUp part of the RivaBench. + +Speech and audio: Thank you guys. Are you guys trying to judging my attractive level? It's weird level. Frustrating, and I find that out from things people say to me like, my friend's mom, like, "Chelsea! I saw an ugly Chelsea on the street the other day!". Basically that translate to me like you saw an ugly person on the street and you thought about me. [audience laughed] + +Question: Why did the audience laugh when the comedian mentioned her friend mom saying she saw an ugly person on the street? + +Answer: She is using self-deprecating humor to make a funny situation. + +Explain: The audience laughs because the person uses self-deprecating humor to make a funny situation. By sharing the comment from their friend's mom, who saw an ugly person on the street and thought of her, the person is employing self-mockery. + +# Choices: + +A. A self-deprecating way to make a funny situation +B. The comedian is known for having many identical-looking friends +C. The comedian's outfit is considered ugly +D. The street the comedian's friend's mom mentioned is known for attracting strange people +E. The comedian's friend's mom is known for having bad eye-sight + +![](images/4db107a322279fd6eaeffc48bfab471493802e2df347b6e9f326fa48aeed1507.jpg) +Figure 8. 
Example of StandUp part of the RivaBench. + +Speech and audio: I had so many, um, culture shocks since I moves here. For example as a foreigner I really hate tipping. [silence] But as a bartender I really hate foreigners. [audience laughed] + +Question: Why did the audience laugh when she said "as a bartender I really hate foreigners"? + +Answer: She uses a technique called "reversal" to make contrast + +Explain: The audience laughs because the comedian employs a linguistic and comedic technique known as \\"reversal\" or \\"role inversion.\\" By stating \\"as a bartender I really hate foreigners,\\" the comedian subverts the statement as a foreigner I really hate tipping, which is a humorous and unexpected twist. + +# Choices: + +A. The comedian was drinking alcohol on stage. +B. The comedian made a funny face while saying it +C. The comedian used a comedic technique called "reversal" +D. The audience agreed with her statement +E. The audience misunderstood her accent + +# C. Academic Data Examples + +Two examples of the Academic part of RivaBench are shown in Fig. 9 and 10 respectively. + +![](images/272fcdc3a3122eca9990d25a3fb957a556f927b7d1ccdec213da353655ae4c0c.jpg) +Figure 9. Example of Academic part of the RivaBench. + +Speech and audio:...In which she's used a creative use of the twin pair kind of study design, and this is sort of interesting because you can. Because you can, because of their twins, who are discordant for Tbi and discordant for for onset of dementia. you control for early genetics, you control for early life exposures. And what Brenda has found is that, yes, indeed, Tbi is increased with with risk of dementia, about a $60\%$ increase exactly the. The same kind of effect size we've been finding, but it wasn't Alzheimer's. It seems like it's non Alzheimer's disease dementia. And I think we need to understand more. What is that? Is it vascular is? Is it Cte that's now met with aging? We don't really know... + +Question: How does the twin study isolate TBI's impact on dementia risk? + +Answer: By analyzing twins with inconsistent timing of onset of TBI and dementia. + +Explain: The increased risk of $"\text{all - cause dementia} \backslash "$ and $"\text{non - Alzheimer's dementia} \backslash "$ mentioned in the slides suggests an association between TBI and an increased risk of dementia, particularly non-Alzheimer's type dementia...By analyzing inconsistent pairs, researchers can more accurately assess the association between TBI and dementia, thereby providing a scientific basis for prevention and intervention. + +# Choices: + +A. By analyzing twins with inconsistent timing of onset of TBI and dementia. +B. By excluding early life exposures from the study. +C. By observing the progression of dementia in twins without TBI. +D. By ensuring differences in dementia risk are due to TBI. +E. By focusing only on Alzheimer's dementia outcomes. + +![](images/a7cbfefefd9b26db46aef3b8c66b6d768940763b9bd96ea933ae7d4c406b7669.jpg) +Figure 10. Example of Academic part of the RivaBench. + +Speech and audio:...we observed statistically significant improvement in 6 out of the 7 key competencies we evaluated them on. So the use of robot AR improved the following key competencies, understanding of voltage and current, connections in series and parallel, circuitry, use of breadboard, measurements, and finally having a working circuit. Also, in the Zoom condition, only 3 out of the 12 students managed to complete all the exercises available. 
Meanwhile, 7 out of the 12 students from the robot AR condition managed to... + +Question: For a class of 24, how many students achieve full competency in each condition, and what's the improvement ratio of RobotAR over Zoom? + +Answer: RobotAR: 7, Zoom: 3, Improvement: 4/3 + +Explain: To calculate the improvement ratio, we need to compare the proportion of students who reach full competence under both conditions. Specific calculations are as follows:\nProportion of fully capable students under Zoom conditions:\n + +# Choices + +A. RobotAR: 14, Zoom: 6, Improvement: 7/6. +B. RobotAR: 12, Zoom: 4, Improvement: 3/2. +C. RobotAR: 16, Zoom: 8, Improvement: 2/1. +D. RobotAR: 10, Zoom: 5, Improvement: 2/1. +E. RobotAR: 7, Zoom: 3, Improvement: 4/3 + +# D. Synthetic Video Detection Data Examples + +Two synthetic video examples in the SynthDec partition of RivaBench are shown in Fig. 11 and 12 respectively. + +![](images/229b0c22e63825415308e924d2a0432935cf902dfd64d2b2cc090958e6a11d71.jpg) +Figure 11. Example video clip of the SynthDec part of RivaBench. + +![](images/d67bff7a6eeca614edae3ecb4461029afe0bfb7338479fa38490271134f745d7.jpg) + +![](images/cee16ce0a1da72485d296b4e85150766c184f1654a69493278426aebbb6a88bf.jpg) + +![](images/8badbf395a1377cb282b91b7f1ab6116f1c84da5f497920afdbbeef6f143d68a.jpg) +Figure 12. Example video clip of the SynthDec part of RivaBench. + +![](images/5ccf51603dc4c28ad5920f6c1357cd23f842d7cb0290bc8399eba6c2940fae84.jpg) + +![](images/81eecadccd113c53e2ac276471ffb77446e9779760ad5bcb63d6c7935ffc23b0.jpg) + +# E. Prompt Templates + +Prompt templates for video-SALMONN-o1 are shown in Table 5. When trained with the reasoning prompt template, the same template is used during inference. Changing templates may cause small perturbations in performance. + +Table 5. Prompt used for different types of tasks. + +
| Type | Prompt content |
| --- | --- |
| Direct answer | Select the best answer to the following question based on the video. Respond with only the letter of the correct option.<br>{Question}<br>Choose from: A. {Option A}, B. {Option B}… |
| Reasoning | Question:<br>{Question}<br>Choose from: A. {Option A}, B. {Option B}…<br>Answer the question step by step. Output each thinking step. Mark the end of each step with `<end_of_step>` token. |
| SynthDec | An AI-generated video contains unnatural distorted things, such as distorted hands or faces. Is the given video AI generated? Answer YES or NO. Answer step by step and output each step clearly. |
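To make the templates above concrete, here is a small, hypothetical helper showing how the reasoning prompt from Table 5 could be assembled for a multiple-choice question. The function name and the exact way the options are joined are our own illustrative choices, not the authors' formatting code.

```python
# Hypothetical helper that fills the "Reasoning" template from Table 5.
def build_reasoning_prompt(question: str, options: list[str]) -> str:
    letters = "ABCDEFGH"
    choices = ", ".join(f"{letters[i]}. {opt}" for i, opt in enumerate(options))
    return (
        "Question:\n"
        f"{question}\n"
        f"Choose from: {choices}. "
        "Answer the question step by step. Output each thinking step. "
        "Mark the end of each step with <end_of_step> token."
    )


print(build_reasoning_prompt(
    "Why did the audience laugh?",
    ["Self-deprecating humor", "A funny face", "A reversal technique"],
))
```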
+ +# F. Case Studies: Solution with Reasoning Examples + +![](images/f71f8f577bd82d8b189384c481e064132caef87b97eea39d043541de7d1b54a2.jpg) +Figure 13. Example video and solutions from the StandUp test set. + +Question: What does the speaker imply by saying he 'didn't need to know that' at the end of the video? \nChoose from:\nA. He regrets ordering so much food.\nB. When you want to eat, just eat and don't prioritize calories.\nC. He doesn't like knowing calorie counts of food.\nD. He feels guilty about his food choices.\nE. The calorie count was surprisingly low.\nAnswer the question step by step. Output each thinking step. + +# video-SALMONN-o1 SFT: + +1. The speaker is making a joke about how fast food restaurants started putting calorie counts on their menus. +2. He argues that this is a bad thing because it makes people feel guilty about eating large portions of unhealthy food. +3. He then says, 'I didn't need to know that,' implying that he doesn't like being reminded of the calorie count and feels like it's an unnecessary piece of information. +4. Therefore, the answer is \*\*C. He doesn't like knowing calorie counts of food\*. (Misinterpretation) + +# video-SALMONN-ol Process DPO: + +1. The The speaker is making a joke about how fast food restaurants use calorie counts to manipulate customers into buying more food. +2. He says he didn't need to know that because he already knew he was going to eat a lot of calories. +3. His tone and delivery suggest he's not actually upset about the calorie count, but rather finds it humorous that he doesn't need to be told how unhealthy his choices are. (Combining audio-visual information) +4. Therefore, the speaker implies that when you want to eat, just eat and don't prioritize calories. +5. The answer is B + +![](images/40b59bd5032d8289f1c901b35dc8e0bfe3919ff56d0b20ed55823e4c505e2514.jpg) + +Question: What is on the legend when he is hit by the turret? Choose from:\nA. Sand.\nB. Ice.\nC. Thunder.\nD. Fire.\nAnswer the question step by step. + +video-SALMONN-o1 SFT: + +1. The question asks about the legend which he is hit by the turrets. + +video-SALMONN-ol Process DPO: + +t the t + +![](images/9ff01db587b24470e91c59e436dbe5cfe3506333870a94c82d1b09e3549da06d.jpg) +Figure 15. Example video and solutions from videoMME test set. + +# G. Case Studies: Zero-shot Synthetic Video Detection + +![](images/1d196a28cd39f671af01399218f2ebf399bfc95ae47726ec66fa7bf670f2dcfa.jpg) +Figure 16. Example output from video-SALMONN-o1, GPT-4o and Gemini-1.5-pro for synthetic video detection. + +Question: An AI-generated video contains unnatural distorted things, such as distorted hands or faces. Is the given video AI generated? Answer YES or NO. Answer step by step and output each step clearly. + +LLaVA-OneVision: No. The video is real. + +# video-SALMONN-o1: + +The video shows a man running towards a train, then collapsing in pain. + +The background is blurred and the colors are distorted, suggesting motion blur. + +AI-generated videos often use techniques like motion blur to create a sense of speed or urgency. + +This can sometimes result in unnatural distortions in the image. + +In this case, the blurring and distortion are more pronounced than what would be expected from natural camera shake or movement. + +The hands of the man appear particularly distorted, which is not typical for real-life footage. + +Therefore, based on the visual cues of the video, it is likely that the video is AI generated. + +So the answer is YES. 
+ +# GPT-40: + +Let's analyze the video step by step: + +**Clothing and Movement**: The person in the video is wearing a coat and is running. The movement appears natural, and the coat flows as expected with the motion. There are no visible distortions in the clothing or the movement of the person. + +**Background and Environment**: The background shows a train station with people and a train. The environment looks consistent and realistic, with no obvious distortions or anomalies in the structure of the train, platform, or other people. + +**Hands and Body Parts**: The hands and other visible body parts of the person appear normal and proportionate. There are no signs of unnatural distortions, such as extra fingers or oddly shaped limbs, which are common indicators of AI-generated content. + +**Color and Lighting**: The color and lighting in the video appear consistent with a real-world setting. There are no unusual color shifts or lighting inconsistencies that would suggest AI generation. + +**Conclusion**: NO, the video is not AI-generated.. + +# Gemini-1.5-pro: + +Here's how to figure out if the video is AI-generated: + +**Examine the man's face while he's running:** Does it look unusually smooth, blurry, or distorted in any way? + +**Pay close attention to the fall:** Do the movements look natural, or are they jerky and unrealistic?** Based on these observations, the answer is likely NO.** + +The video doesn't exhibit the classic signs of AI-generated content. The man's face, while a bit obscured by the speed, doesn't show the telltale signs of AI distortion. Additionally, the fall, though quick, appears to have natural movement. + +The final answer is: NO + +![](images/51913b1915e653980c7e03a3905316290ed0bb04e09cf45b7059081e39489d51.jpg) + +![](images/624f38580bc97a28eea86778e3010774a309660bad94d9e8fca08e1934e7f82c.jpg) +Figure 17. Example output from video-SALMONN-o1, GPT-4o and Gemini-1.5-pro for synthetic video detection. + +Question: An AI-generated video contains unnatural distorted things, such as distorted hands or faces. Is the given video AI generated? Answer YES or NO. Answer step by step and output each step clearly. + +LLaVA-OneVision: NO. + +# video-SALMONN-ol: + +The video shows a cyclist riding on a road with a field of flowers in the background. + +The colors are vibrant and the scene is smooth, suggesting it might be AI generated. + +Upon closer inspection, we can see that the cyclist's shadow is perfectly aligned with the cyclist, which is unusual for a moving object. This suggests a lack of natural motion blur or distortion, a common characteristic of AI-generated videos. + +Therefore, based on the smoothness of the scene and the unnatural alignment of the cyclist's shadow, it is highly likely that the video is AI generated. + +So the answer is YES. + +# GPT-40: + +Here are the steps to follow: + +**Subject Examination**: The cyclist appears to have a natural posture and movement. There are no visible distortions in the limbs or face that suggest AI generation +**Background Analysis**: The landscape, including the road, fields, and sky, appears consistent and natural. There are no obvious texture or perspective issues. +**Motion Artifacts**: The motion of the cyclist and the background blur due to speed appear natural and consistent with real-life video capture. 
+**Color Anomalies**: The colors in the video, including the yellow fields and blue sky, appear natural and consistent without any abrupt or unnatural transitions +**Overall Realism**: The video maintains a high level of realism throughout, with no elements that seem artificial or out of place. + +# Gemini-1.5-pro: + +**Examine the cyclist's form:** Does the body shape look natural throughout the pedaling motion? Are there any points where the limbs look distorted or bend in odd ways? + +**Check the bike:** Does the bike's movement seem realistic? Do the wheels spin smoothly? + +Look for unusual artifacts: AI-generated content can sometimes have strange textures or patterns, particularly in areas of motion blur. + +**Without the actual video, it's impossible to give a definitive YES or NO answer.** + +**However**, if you observed any of the issues mentioned above while watching the video, it would be a strong indicator of AI generation. + +You haven't provided me with the analysis of the video, so I can't give you a final answer of YES or NO. + +# H. Examples of Contrastive Step Selection Process + +![](images/54a06884c54cdf961719a11d456649a85ded22c7627e0dc93ff8633e869f23a2.jpg) +Question: What does the girl do after the man turned to walk away at the end? A. went to the kitchen\nB. check her arm\nC. grab girl's hands to dance\nD. thumbs up\nE. walk to man +Answer: E. Walk to man + +![](images/72470c0e839d6f85e3eb989c6defae24d5d5a3abfafd2eac90c54f58ca229fde.jpg) + +![](images/916fce036005f8567184ef10f0eafac2230bd61543318ba8aedcb51d34f88f68.jpg) +Figure 18. Example of the contrastive step selection process where two sampled paths are shown and the scores $d_{s_k}$ are given for each reasoning steps. The 3rd step in the first solution is wrong due to visual hallucination, and as a result, a very high score is assigned to that step and that step will be used to perform rollout. + +# I. Additional Results on VideoHallucer + +
| Model | Object relation | Temporal | Semantic detail | Factual | Non-factual | Overall |
| --- | --- | --- | --- | --- | --- | --- |
| Gemini-1.5-pro | 52.0 | 18.5 | 53.5 | 16.5 | 48.5 | 37.8 |
| video-SALMONN-o1 | 63.4 | 56.4 | 16.0 | 43.0 | 55.6 | 46.2 |
+ +Table 6. video-SALMONN-o1 performance compared against Gemini-1.5-pro on VideoHallucer (Wang et al., 2024c) and report the overall accuracy (when the entire pair is correct) for each category. \ No newline at end of file diff --git a/ICML/2025/video-SALMONN-o1_ Reasoning-enhanced Audio-visual Large Language Model/images.zip b/ICML/2025/video-SALMONN-o1_ Reasoning-enhanced Audio-visual Large Language Model/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..0169ff1c1fee25a4763f36c6c92c1292e85c2cc0 --- /dev/null +++ b/ICML/2025/video-SALMONN-o1_ Reasoning-enhanced Audio-visual Large Language Model/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d552c4bcb4c84dc94e01d48e52bac79507007898ee75d397429dfcf9b633519b +size 1521032 diff --git a/ICML/2025/video-SALMONN-o1_ Reasoning-enhanced Audio-visual Large Language Model/layout.json b/ICML/2025/video-SALMONN-o1_ Reasoning-enhanced Audio-visual Large Language Model/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..b6cb0a18c2c4fd1df4b810ab2c3a84729567476c --- /dev/null +++ b/ICML/2025/video-SALMONN-o1_ Reasoning-enhanced Audio-visual Large Language Model/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:db4c2685718963280ab4f6c0ab1dac8f250334a467576e979e8eb2ca9a0692f6 +size 672423 diff --git a/ICML/2025/xLSTM 7B_ A Recurrent LLM for Fast and Efficient Inference/ef6c2ec0-2c1c-42ea-a53a-74f20c4b53c1_content_list.json b/ICML/2025/xLSTM 7B_ A Recurrent LLM for Fast and Efficient Inference/ef6c2ec0-2c1c-42ea-a53a-74f20c4b53c1_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..23463ca7747e954d4034a59ba7984abbd511cce3 --- /dev/null +++ b/ICML/2025/xLSTM 7B_ A Recurrent LLM for Fast and Efficient Inference/ef6c2ec0-2c1c-42ea-a53a-74f20c4b53c1_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e70d3f312434584ac2493212b3d471e227ae351078d04ffde444f99fc9f16cf1 +size 142583 diff --git a/ICML/2025/xLSTM 7B_ A Recurrent LLM for Fast and Efficient Inference/ef6c2ec0-2c1c-42ea-a53a-74f20c4b53c1_model.json b/ICML/2025/xLSTM 7B_ A Recurrent LLM for Fast and Efficient Inference/ef6c2ec0-2c1c-42ea-a53a-74f20c4b53c1_model.json new file mode 100644 index 0000000000000000000000000000000000000000..370b6281b3326ea72f0780012fdc4d0542aee2cc --- /dev/null +++ b/ICML/2025/xLSTM 7B_ A Recurrent LLM for Fast and Efficient Inference/ef6c2ec0-2c1c-42ea-a53a-74f20c4b53c1_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:371891d212336a361e47815bedb80de8334f6909af44127187a570ddfd04bfe0 +size 178099 diff --git a/ICML/2025/xLSTM 7B_ A Recurrent LLM for Fast and Efficient Inference/ef6c2ec0-2c1c-42ea-a53a-74f20c4b53c1_origin.pdf b/ICML/2025/xLSTM 7B_ A Recurrent LLM for Fast and Efficient Inference/ef6c2ec0-2c1c-42ea-a53a-74f20c4b53c1_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..99fa916179c205bd489ef0161928e590c473890d --- /dev/null +++ b/ICML/2025/xLSTM 7B_ A Recurrent LLM for Fast and Efficient Inference/ef6c2ec0-2c1c-42ea-a53a-74f20c4b53c1_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:53ddaf738444a627009274df64ad0252600580fc3a93f955998046d60702a5e1 +size 1175373 diff --git a/ICML/2025/xLSTM 7B_ A Recurrent LLM for Fast and Efficient Inference/full.md b/ICML/2025/xLSTM 7B_ A Recurrent LLM for Fast and Efficient Inference/full.md new file mode 100644 index 
0000000000000000000000000000000000000000..772664817b3548fff9b02ab97d46f4068f286053 --- /dev/null +++ b/ICML/2025/xLSTM 7B_ A Recurrent LLM for Fast and Efficient Inference/full.md @@ -0,0 +1,607 @@ +# Maximilian Beck $^{*12}$ Korbinian Poppel $^{*12}$ Phillip Lippe $^{*13}$ Richard Kurle $^{1}$ Patrick M. Blies $^{1}$ Günter Klambauer $^{12}$ Sebastian Böck $^{1}$ Sepp Hochreiter $^{12}$ + +# Abstract + +Recent breakthroughs in solving reasoning, math and coding problems with Large Language Models (LLMs) have been enabled by investing substantial computation budgets at inference time. Therefore, inference speed is one of the most critical properties of LLM architectures, and there is a growing need for LLMs that are efficient and fast at inference. Recently, LLMs built on the xLSTM architecture have emerged as a powerful alternative to Transformers, offering linear compute scaling with sequence length and constant memory usage, both highly desirable properties for efficient inference. However, such xLSTM-based LLMs have yet to be scaled to larger models and assessed and compared with respect to inference speed and efficiency. In this work, we introduce xLSTM 7B, a 7-billion-parameter LLM that combines xLSTM's architectural benefits with targeted optimizations for fast and efficient inference. Our experiments demonstrate that xLSTM 7B achieves performance on downstream tasks comparable to other similar-sized LLMs, while providing significantly faster inference speeds and greater efficiency compared to Llama- and Mamba-based LLMs. These results establish xLSTM 7B as the fastest and most efficient 7B LLM, offering a solution for tasks that require large amounts of test-time computation. Our work highlights xLSTM's potential as a foundational architecture for methods building on heavy use of LLM inference. Our model weights, model code and training code are open-source. + +Model: https://huggingface.co/NX-AI/xLSTM-7b + +Code: https://github.com/NX-AI/xlstm and + +https://github.com/NX-AI/xlstm-jax. + +$^{*}$ Equal contribution $^{1}$ NXAI GmbH, Linz, Austria $^{2}$ Johannes Kepler University, Linz, Austria $^{3}$ Now at Google Deepmind. Correspondence to: Maximilian Beck , Korbinian Poppel , Sebastian Bock . + +Proceedings of the $42^{nd}$ International Conference on Machine Learning, Vancouver, Canada. PMLR 267, 2025. Copyright 2025 by the author(s). + +# 1. Introduction + +Recent breakthroughs in test-time compute scaling have unlocked significant improvements in solving complex reasoning and math problems. By sampling multiple promising solutions, the best answers can be provided to the user or used as training targets (Yao et al., 2023; Hao et al., 2023; Guan et al., 2025). However, as state-of-the-art models such as OpenAI o1 $^{1}$ and DeepSeek-R1 (DeepSeek-AI et al., 2025) leverage these methods to push the capabilities of language models to new heights, the significantly increased computational overhead of test-time compute methods requires more efficient architectures that provide greater inference speeds. A promising path involves linear recurrent neural networks with gating mechanisms, including GLA (Yang et al., 2024b), Mamba (Gu & Dao, 2024; Dao & Gu, 2024), RWKV (Peng et al., 2023; 2024), RetNet (Sun et al., 2023), and xLSTM (Beck et al., 2024). Compared to Transformers, these models offer a parallel mode for efficient training (e.g. Yang et al., 2024b) and a recurrent generation mode that both scale linearly with context length. 
The increased compute efficiency combined with constant memory usage during inference allows spending more compute at test-time, but also enables running models locally on edge devices acting as an interface to the user with fast response times. + +xLSTM has shown competitive performance compared to alternative recurrent models and even Transformers in a controlled experimental setting using the same data and similar parameter counts (Beck et al., 2024). Moreover, this architecture also excelled in other domains, such as computer vision (Alkin et al., 2025), robotics (Schmied et al., 2024), molecular biology (Schmidinger et al., 2025), and time series (Kraus et al., 2024). However, so far, xLSTM has not been scaled to datasets beyond 300B tokens and 1.3B parameters. It therefore remains uncertain whether this architecture can match the Transformer's ability to scale effectively with larger model sizes and extract meaningful patterns from ever-larger datasets. + +In this work, we scale the xLSTM to 7B parameters and present our xLSTM 7B, a large language model trained on 2.3T tokens from the DCLM dataset (Li et al., 2024) with + +context length 8192 using 128 H100 GPUs. To achieve this, we improve and optimize the initial xLSTM architecture from Beck et al. (2024) for optimal training efficiency and stability, without sacrificing performance in downstream tasks. Our new architecture fully relies on mLSTM cells with parallel training mode to achieve maximum speed at high language modeling performance. We further optimize the throughput by modifying the surrounding block architecture. By operating the mLSTM in a lower dimensional space and adding position-wise feedforward MLP layers similar to the default Transformer blocks, we increase the amount of compute spent for highly optimized linear layers. Additionally, we discard components such as channel-wise convolutions or learnable skip connections to increase the GPU utilization during training. We find that this optimized block architecture has a $2 \times$ to $4 \times$ higher token throughput compared to the previous xLSTM architecture of Beck et al. (2024), while achieving similar performance on language modeling. In addition to the efficiency optimizations, we optimize the new xLSTM architecture for improved training stability, focusing specifically on the gating mechanism of the mLSTM cell. By introducing soft-capping for input and forget gates and improved initializations for the input gate we effectively mitigate high gradient norm spikes and variance, and improve the performance of our xLSTM 7B. + +In our evaluations on language downstream and long-context tasks, xLSTM 7B shows comparable performance to Transformers and Mamba models of the same size, but with our optimized block architecture it achieves the highest prefetch and generation throughput with the lowest GPU memory footprint on our inference efficiency benchmarks. + +To summarize, in this work we present targeted modifications to the xLSTM architecture in order to (i) improve training and inference efficiency, and (ii) ensure training stability at large scales. (iii) We introduce a new language model with 7B parameters based on the xLSTM architecture trained on $2.3\mathrm{T}$ tokens with 8k context length demonstrating the highest inference speed and efficiency in our benchmarks. 
We release the pre-trained model xLSTM 7B on Huggingface$^2$ and provide the model implementation and training code$^3$, including optimized Triton kernels$^4$ for fast training and inference.

# 2. Background: xLSTM with Matrix Memory

In this section, we reassess the mLSTM (Beck et al., 2024), on which we build our xLSTM 7B. The mLSTM cell is fully parallelizable and therefore enables highly efficient large-scale model training while maintaining fast recurrent inference with constant memory.

Generation Mode. During inference, when generating tokens, the mLSTM cell processes the series of input vectors $\boldsymbol{x}_t \in \mathbb{R}^d$ for time steps $t \in \{1, \dots, T\}$ in a recurrent manner, mapping a state $(\boldsymbol{h}_{t-1}, \boldsymbol{C}_{t-1}, \boldsymbol{n}_{t-1}, m_{t-1})$ to a successor state $(\boldsymbol{h}_t, \boldsymbol{C}_t, \boldsymbol{n}_t, m_t)$ given an input $\boldsymbol{x}_t$. Here, $\boldsymbol{h}_t \in \mathbb{R}^{d_{hv}}$ denotes the hidden state, $\boldsymbol{C}_t \in \mathbb{R}^{d_{qk} \times d_{hv}}$ denotes the cell state responsible for long-term memory, $\boldsymbol{n}_t \in \mathbb{R}^{d_{qk}}$ denotes the normalizer state, and $m_t \in \mathbb{R}$ denotes the max state controlling the magnitude of the exponential input gate.

In the recurrent mode (generation), the mLSTM cell

$$
\boldsymbol{h}_t = \text{mLSTMCell}\left(\boldsymbol{x}_t, \boldsymbol{h}_{t-1}, \boldsymbol{C}_{t-1}, \boldsymbol{n}_{t-1}, m_{t-1}\right), \tag{1}
$$

is defined by the following state update equations:

$$
m_t = \max\left\{\log\sigma\left(\tilde{\mathrm{f}}_t\right) + m_{t-1},\ \tilde{\mathrm{i}}_t\right\}, \tag{2}
$$

$$
\boldsymbol{C}_t = \mathrm{f}_t\, \boldsymbol{C}_{t-1} + \mathrm{i}_t\, \boldsymbol{k}_t \boldsymbol{v}_t^{\top}, \tag{3}
$$

$$
\boldsymbol{n}_t = \mathrm{f}_t\, \boldsymbol{n}_{t-1} + \mathrm{i}_t\, \boldsymbol{k}_t, \tag{4}
$$

$$
\widetilde{\boldsymbol{h}}_t = \frac{\boldsymbol{C}_t^{\top}\left(\boldsymbol{q}_t / \sqrt{d_{qk}}\right)}{\max\left\{\left|\boldsymbol{n}_t^{\top}\left(\boldsymbol{q}_t / \sqrt{d_{qk}}\right)\right|,\ \exp(-m_t)\right\}}, \tag{5}
$$

$$
\boldsymbol{h}_t = \mathbf{o}_t \odot \operatorname{Norm}\left(\widetilde{\boldsymbol{h}}_t\right). \tag{6}
$$

The gate activations are computed as:

$$
\mathrm{f}_t = \exp\left(\log\sigma\left(\tilde{\mathrm{f}}_t\right) + m_{t-1} - m_t\right), \tag{7}
$$

$$
\mathrm{i}_t = \exp\left(\tilde{\mathrm{i}}_t - m_t\right), \tag{8}
$$

$$
\mathbf{o}_t = \sigma(\tilde{\mathbf{o}}_t). \tag{9}
$$

The query, key, and value vectors $\boldsymbol{q}_t, \boldsymbol{k}_t \in \mathbb{R}^{d_{qk}}$ and $\boldsymbol{v}_t \in \mathbb{R}^{d_{hv}}$ are computed as $\{\boldsymbol{q}_t, \boldsymbol{k}_t, \boldsymbol{v}_t\} = \boldsymbol{W}_{\{q,k,v\}} \boldsymbol{x}_t + \boldsymbol{b}_{\{q,k,v\}}$. The scalar input and forget gates $\mathrm{i}_t, \mathrm{f}_t \in \mathbb{R}$ are computed from the pre-activations $\{\tilde{\mathrm{i}}_t, \tilde{\mathrm{f}}_t\} = \boldsymbol{w}_{\{\mathrm{i},\mathrm{f}\}}^{\top} \boldsymbol{x}_t + b_{\{\mathrm{i},\mathrm{f}\}}$, and the vector output gate $\mathbf{o}_t \in \mathbb{R}^{d_{hv}}$ is computed from the pre-activation $\tilde{\mathbf{o}}_t = \boldsymbol{W}_{\mathbf{o}} \boldsymbol{x}_t + \boldsymbol{b}_{\mathbf{o}}$, with the sigmoid function $\sigma$.
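For readers who prefer code, the following is a minimal, unoptimized single-step sketch of the recurrent mLSTM cell in Eqs. (2)-(9), written in plain PyTorch for illustration; the released implementation uses fused Triton kernels instead (cf. Sec. 3.1). Variable names are ours, and the Norm(·) of Eq. (6) is instantiated here as a parameter-free RMS normalization (the choice of normalization layer is discussed right below).

```python
# Illustrative sketch of one recurrent mLSTM step, Eqs. (2)-(9); not the fused kernel.
# Shapes: q, k: (d_qk,), v, o_pre: (d_hv,), C: (d_qk, d_hv), n: (d_qk,),
# m and the gate pre-activations i_pre, f_pre: scalar tensors.
import math
import torch
import torch.nn.functional as F


def mlstm_cell_step(q, k, v, i_pre, f_pre, o_pre, C, n, m, eps=1e-6):
    log_f = F.logsigmoid(f_pre)                                   # log sigma(f~_t)
    m_new = torch.maximum(log_f + m, i_pre)                       # Eq. (2)
    f = torch.exp(log_f + m - m_new)                              # Eq. (7)
    i = torch.exp(i_pre - m_new)                                  # Eq. (8)
    C_new = f * C + i * torch.outer(k, v)                         # Eq. (3)
    n_new = f * n + i * k                                         # Eq. (4)
    q_scaled = q / math.sqrt(q.shape[-1])
    denom = torch.maximum((n_new @ q_scaled).abs(), torch.exp(-m_new))
    h_tilde = (C_new.T @ q_scaled) / denom                        # Eq. (5)
    h_norm = h_tilde * torch.rsqrt(h_tilde.pow(2).mean() + eps)   # Norm(.) stand-in
    h = torch.sigmoid(o_pre) * h_norm                             # Eqs. (6), (9)
    return h, C_new, n_new, m_new


# One step from the zero initial state with toy dimensions:
d_qk, d_hv = 4, 8
h, C, n, m = mlstm_cell_step(
    torch.randn(d_qk), torch.randn(d_qk), torch.randn(d_hv),
    torch.tensor(0.5), torch.tensor(2.0), torch.randn(d_hv),
    torch.zeros(d_qk, d_hv), torch.zeros(d_qk), torch.tensor(0.0),
)
```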
The normalization layer Norm in (6) can be either RMSNorm (Zhang & Sennrich, 2019) or LayerNorm (Ba et al., 2016).

Training Mode. In training, the mLSTM cell processes a full sequence of input vectors $\mathbf{X} \in \mathbb{R}^{T \times d}$ and computes the hidden states $\mathbf{H} \in \mathbb{R}^{T \times d_{hv}}$ for all $T$ time steps in parallel. We denote the mLSTM cell in parallel mode (training) as

$$
\boldsymbol{H} = \text{mLSTMCell}(\boldsymbol{X}). \tag{10}
$$

Due to the linear nature of the recurrence in equations (2)-(9), the hidden states $\boldsymbol{H}$ can be computed in chunks without materializing the intermediate memory states $(C_t, n_t, m_t)$.

![](images/05fc94d8025223fe4138b89152fdc3765b832079a668655af87b0c76f1f66397.jpg)
Figure 1. Sketch of the updated xLSTM Block. The lower part is an output-gated sequence-mix layer with the mLSTM at its core, whereas the upper part is a gated MLP (SwiGLU) as a feature/channel-mix layer. See Fig. 8 for details.

This chunkwise-parallel form enables highly efficient training kernels, analogous to FlashLinearAttention (Yang et al., 2024b; Yang & Zhang, 2024), surpassing the training speeds of FlashAttention (Dao, 2024; Shah et al., 2024). For details on the chunkwise-parallel training kernels for the mLSTM cell, we refer to Beck et al. (2025).

Multi-Head mLSTM. Similar to multi-head attention in Transformers (Vaswani et al., 2017), the xLSTM has $N_{\mathrm{head}} = d / d_{hv}$ different mLSTM cells $\mathrm{mLSTMCell}^{(i)}$. The hidden states $\boldsymbol{H}^{(i)}$ of every head are then concatenated and once again projected, resulting in the mLSTM layer

$$
\operatorname{mLSTM}(\boldsymbol{X}) = \operatorname{Concat}\left(\boldsymbol{H}^{(1)}, \dots, \boldsymbol{H}^{(N_{\text{head}})}\right) \boldsymbol{W}_{\text{proj}}^{\top}, \tag{11}
$$

where $\boldsymbol{H}^{(i)} = \mathrm{mLSTMCell}^{(i)}(\boldsymbol{X})$. We discuss key considerations for choosing the number of parallel heads, or in other words the head dimension $d_{hv}$, in Sec. 3.1.

# 3. Optimized xLSTM 7B Architecture

The emerging paradigm of increasing test-time computation necessitates i) the development of novel architectures optimized for efficient inference. Additionally, new architectures must ii) be viable in large-scale pre-training setups, and thus be highly efficient during training, and iii) exhibit stable convergence. Our xLSTM 7B is designed to meet these three challenges by offering an architecture that can be trained efficiently and with stable convergence and is also highly efficient at inference. In Sec. 3.1, we detail our optimization of the xLSTM architecture for efficiency during both inference and training. We then describe in Sec. 3.2 our actions to improve and ensure stable convergence for training large xLSTM models, focusing specifically on the gating mechanism of the mLSTM cell.

# 3.1. Optimizing for Efficiency

The core of the xLSTM 7B architecture, the mLSTM cell, with its recurrent and parallel modes, enables efficient inference and training. To leverage its full potential, we revisit the design of the surrounding block structures.

Previous mLSTM Block. Similarly to other linear RNNs like Mamba (Gu & Dao, 2024; Hua et al., 2022), the previous xLSTM architecture places the mLSTM cell, combined with channel-wise convolutions, in between a linear up-projection and down-projection, which is referred to as the pre up-projection block (Beck et al., 2024).
These blocks combine sequence mixing and channel mixing in one block and are therefore stacked homogeneously, without interleaving position-wise feed-forward MLP layers. Although the pre up-projection block architecture has proven competitive language modeling performance for the xLSTM up to 1.4B parameters, it comes with a substantial trade-off in computational efficiency for the following reasons:

1. Within the pre up-projection block, the mLSTM operates in a significantly higher dimension than the embedding dimension of the model. This leads to a substantially higher computational cost and GPU memory usage for the mLSTM operation.
2. Omitting position-wise feed-forward MLP layers results in a decreased proportion of highly efficient linear layer FLOPs in the model.
3. The previous xLSTM architecture uses several additional components such as learnable skip connections, channel-wise convolutions, and small (block-diagonal) projection layers to compute queries, keys and values. Without custom kernel fusion, these small operations result in multiple short kernel calls on the GPU, which cannot effectively utilize tensor cores$^5$ and, consequently, significantly reduce GPU utilization.
4. Previously, the input and forget gate pre-activations were computed from concatenated query, key and value projections. In a large-scale tensor-parallel training setup this requires an additional all-reduce operation per mLSTM block, which increases the overall communication cost.

These limitations prevent efficient scaling of the xLSTM architecture as introduced by Beck et al. (2024) beyond 1.4B parameters. To scale the xLSTM to even larger model sizes, we optimize the mLSTM block for maximal efficiency by addressing these four limitations.

Optimizing the mLSTM Block. To begin, we operate the mLSTM cell in the model's embedding dimension instead of a higher dimensional space, and place position-wise feed-forward MLP layers after each mLSTM layer. This modification increases the proportion of highly optimized linear layer (i.e. matrix multiplication) FLOPs and reduces the computation cost of the mLSTM operation (see App. E for details on the FLOP computation). The significantly reduced GPU memory usage enables larger batch sizes during training, which also increases training efficiency. The result is the default dense Transformer block configuration referred to as the post up-projection block by Beck et al. (2024):

$$
\boldsymbol{z} = \boldsymbol{x} + \operatorname{mLSTM}(\operatorname{Norm}(\boldsymbol{x})), \tag{12a}
$$

$$
\boldsymbol{y} = \boldsymbol{z} + \operatorname{MLP}(\operatorname{Norm}(\boldsymbol{z})), \tag{12b}
$$

where $\boldsymbol{x}$ is the input to the block, $\boldsymbol{z}$ is the intermediate output of the mLSTM layer defined in (11), and $\boldsymbol{y}$ is the block output. The MLP is a SwiGLU (Shazeer, 2020) (see Fig. 1).

Moreover, we discard operations like the channel-wise convolution and the learnable skip-connection, and replace the block-wise query, key and value projections by dense linear layers. This again increases linear layer FLOPs and ensures effective usage of tensor cores within the mLSTM layer.

Finally, we ensure that the gate pre-activations for every head are computed independently as outlined in (11). This allows us to apply the model parallelization strategies optimized for Transformers with self-attention (Shoeybi et al., 2020) to our xLSTM 7B architecture and therefore minimize additional communication cost.
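As a structural illustration of the post up-projection block in Eq. (12) (cf. Fig. 1), the sketch below composes a pre-norm sequence-mix layer and a pre-norm SwiGLU channel-mix layer with residual connections. It is an illustrative PyTorch sketch (requiring PyTorch 2.4+ for nn.RMSNorm), with any mLSTM layer implementation passed in as the sequence-mix module; the class and argument names are ours, not the released code.

```python
# Illustrative sketch of the post up-projection block, Eqs. (12a)/(12b).
import torch
import torch.nn as nn
import torch.nn.functional as F


class SwiGLU(nn.Module):
    """Gated MLP (SwiGLU) used as the channel-mix layer."""

    def __init__(self, d: int, d_ff: int):
        super().__init__()
        self.w_gate = nn.Linear(d, d_ff, bias=False)
        self.w_up = nn.Linear(d, d_ff, bias=False)
        self.w_down = nn.Linear(d_ff, d, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.w_down(F.silu(self.w_gate(x)) * self.w_up(x))


class PostUpProjectionBlock(nn.Module):
    def __init__(self, d: int, d_ff: int, mlstm_layer: nn.Module):
        super().__init__()
        self.norm1 = nn.RMSNorm(d)
        self.norm2 = nn.RMSNorm(d)
        self.mlstm = mlstm_layer    # sequence-mix layer, Eq. (11)
        self.mlp = SwiGLU(d, d_ff)  # channel-mix layer

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (batch, seq, d)
        z = x + self.mlstm(self.norm1(x))   # Eq. (12a)
        y = z + self.mlp(self.norm2(z))     # Eq. (12b)
        return y


# Example with a placeholder sequence-mix layer standing in for the mLSTM layer:
block = PostUpProjectionBlock(d=64, d_ff=256, mlstm_layer=nn.Identity())
out = block(torch.randn(2, 16, 64))
```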
+ +These optimizations result in our optimized mLSTM block described in Fig. 1 and Fig. 8 in the appendix, of which we stack 32 in our xLSTM 7B architecture. We observe that our optimizations achieve a $3.5 \times$ speedup in training for 1.4B models, with a slight trade-off in validation perplexity that can be mitigated by a few more training steps (see Tab. 2). Although the modified block structure reduces the size of the mLSTM cell memory states $C$ , we find that it does not compromise the language modeling quality of our model. + +Optimizing the Memory Capacity. The overall memory capacity of the xLSTM, i.e. the amount of information that can be stored from an input sequence, is related to the physical size of its memory cell states $C$ of shape $d_{qk} \times d_{hv}$ in GPU memory. By choosing either the number of heads or the head dimension $d_{hv}$ , the other is given by the relation to the embedding dimension $d = \# \text{heads} \times d_{hv}$ . For the xLSTM 7B we set $d_{qk} = d_{hv}/2$ similar to Sun et al. (2023). We can then compute the total memory state size by #blocks × #heads × $d_{qk} \times d_{hv} \times 4$ bytes, assuming that the state is stored in float32 format. In Tab. 3 we show the memory state size for different numbers of heads as well as their trade-offs with language modeling performance and training efficiency. We use a larger memory state size and + +a slightly longer train step time to make sure the model is not constrained by a lack of memory. We elaborate further on this in Sec. 5. We choose 8 heads with head dimension $d_{hv} = 512$ for xLSTM7B. + +Fused Generation Kernels for the mLSTM Cell. During autoregressive generation, the hidden state outputs of the mLSTM cell are computed, with its recurrent formulation given by (1) - (9). The recurrent formulation consists of a combination of an outer-product, dot-products and several pointwise operations, which translates to individual consecutive GPU kernels. Since each kernel loads its inputs from and stores its outputs to GPU memory, this increases the amount of slow memory operations. To ensure that intermediate results of equations (2)-(5) are not unnecessarily transferred to GPU memory, but instead remain on the GPU's compute chips, we write fused GPU kernels for the mLSTM generation mode. This results in significantly faster generation as shown in speed benchmarks in Sec. 5.2. + +# 3.2. Optimizing for Stability + +We find that the previous xLSTM architecture at the 7B parameter scale often becomes unstable in early stages of training. In particular, we noticed that training at higher learning rates leads to large spikes in the gradient magnitude and loss value, similar to reports from previous works on Mamba-based models (Lieber et al., 2024; Dao & Gu, 2024; Zuo et al., 2024). We further observed and attribute these spikes to very large outlier features, i.e. individual feature values that are significantly larger than the average feature value (He et al.). We address these stability issues by (i) the use of RMSNorm instead of LayerNorm, (ii) soft-capping of the input and forget gates, and (iii) a negative initialization of the input gate bias. + +Pre-Norm with RMSNorm. Many works report that replacing the LayerNorm by RMSNorm at the input of each layer (e.g. in the pre-norm setting (Xiong et al., 2020)) improves training stability for Transformers (OLMo et al., 2025; Touvron et al., 2023; Gemma Team, 2024a; Yang et al., 2024a) and Mamba models (Zuo et al., 2024). Our experiments in App. C.2, Fig. 
9 confirm that this also applies to the pre-norm normalization layers in (12) of our xLSTM architecture. Therefore, we replace LayerNorm by RMSNorm in our xLSTM architecture.

Gate Soft-Capping. To reduce potential large outlier features and related loss spikes, we apply soft-capping to the input and forget gate pre-activations $\tilde{\mathrm{i}}_t$ and $\tilde{\mathrm{f}}_t$, such that their values stay between $-a$ and $a$ for a specific cap value $a$. We cap the gates using $a = 15$ with the function

$$
\mathrm{softcap}_{a}(\boldsymbol{x}) = a \cdot \tanh(\boldsymbol{x} / a). \tag{13}
$$

In Sec. 5.3 and App. Sec. C.2, we confirm that this significantly improves the stability and performance of our

![](images/c6f2ebb7563ea6ed8834f8c3309d17fac7a0acee1e09b608b814da46bfa8eedd.jpg)
Figure 2. Loss and Gradient Norm during Pretraining of xLSTM 7B. We show the mean and maximum value over 50 steps. Our enhanced architecture and initialization enable stable pretraining of xLSTM 7B, exhibiting only two brief loss spikes early in training, both of which recovered rapidly.

![](images/3e87f06dc5cdf78aace6b3a13072a32be6723d7c5816acba20a416f1e44a5c16.jpg)

Table 1. Model Performance on Huggingface Leaderboard v2. ↑ indicates larger values are better.
| MODEL | BBH ↑ | MMLU-PRO ↑ | MATH ↑ | MUSR ↑ | GPQA ↑ | IFEVAL ↑ | AVERAGE ↑ |
|---|---|---|---|---|---|---|---|
| TRANSFORMERS | | | | | | | |
| Llama-3.1-8B | 0.465 | 0.325 | 0.042 | 0.379 | 0.312 | 0.125 | 0.275 |
| Llama-2-7B-hf | 0.349 | 0.186 | 0.013 | 0.363 | 0.269 | 0.264 | 0.241 |
| OLMo-7B-hf | 0.330 | 0.118 | 0.010 | 0.357 | 0.257 | 0.280 | 0.225 |
| Gemma-7B | 0.426 | 0.293 | 0.061 | 0.408 | 0.295 | 0.272 | 0.292 |
| Ministral-8B-Instruct-2410 | 0.496 | 0.350 | 0.151 | 0.430 | 0.319 | 0.322 | 0.345 |
| Bloom-7B1 | 0.311 | 0.111 | 0.000 | 0.354 | 0.264 | 0.138 | 0.196 |
| Gpt-j-6B | 0.321 | 0.125 | 0.009 | 0.363 | 0.261 | 0.250 | 0.222 |
| Pythia-6.9B | 0.326 | 0.116 | 0.006 | 0.355 | 0.270 | 0.232 | 0.217 |
| Qwen2.5-7B | 0.541 | 0.435 | 0.165 | 0.446 | 0.329 | 0.359 | 0.379 |
| Gemma-2-9B | 0.543 | 0.414 | 0.117 | 0.453 | 0.334 | 0.217 | 0.346 |
| DCLM-7B | 0.426 | 0.312 | 0.030 | 0.392 | 0.303 | 0.228 | 0.282 |
| TRANSFORMER-RECURRENT HYBRIDS | | | | | | | |
| Zamba2-7B | 0.489 | 0.319 | 0.114 | 0.402 | 0.318 | 0.375 | 0.336 |
| RECURRENT MODELS | | | | | | | |
| Falcon-Mamba-7B (pre-decay) | 0.373 | 0.177 | 0.024 | 0.387 | 0.275 | 0.252 | 0.248 |
| Falcon-Mamba-7B | 0.429 | 0.229 | 0.039 | 0.412 | 0.299 | 0.335 | 0.290 |
| MambaCodestral-7B (v0.1) | 0.405 | 0.191 | 0.023 | 0.359 | 0.266 | 0.322 | 0.261 |
| RWKV-v5-Eagle-7B | 0.325 | 0.121 | 0.007 | 0.322 | 0.243 | 0.266 | 0.214 |
| RWKV-v6-Finch-7B | 0.342 | 0.154 | 0.014 | 0.338 | 0.265 | 0.264 | 0.230 |
| xLSTM 7B | 0.381 | 0.242 | 0.036 | 0.379 | 0.280 | 0.244 | 0.260 |
| xLSTM 7B LCTX | 0.390 | 0.252 | 0.040 | 0.374 | 0.253 | 0.234 | 0.257 |
+ +xLSTM architecture. Additionally, we apply soft-capping with $a = 30$ to the final layer logits, similar to Gemma Team (2024b). + +Negative Input Gate Bias Initialization. We observe that early on in training our xLSTM models experience large gradient norm spikes, which affect the final performance of our model (see Fig. 11 in App. C.2). Initializing the input gate at large negative values (e.g. -10) effectively mitigates these gradient norm spikes and improves performance. We analyze the impact of the input gate further in Sec. 5.3. + +In summary, our optimizations enable a remarkably stable pre-training of xLSTM 7B, as we show in Figure 2. + +We outline the detailed block architecture of our xLSTM 7B in Appendix A and our training recipe in Appendix B. + +# 4. Related Work + +Although the largest language models to date have predominantly relied on Transformer-based architectures, recurrent LLMs and hybrid models have recently gained traction as alternative architectures due to their enhanced efficiency in processing long contexts. Many recent efforts have targeted the 7B parameter scale (or nearby), striking a balance between model capacity and resource constraints. Griffin (De et al., 2024) is one of the first hybrid recurrent models that was trained with up to 14B parameters. Later, the same architecture was used to train RecurrentGemma with 9B parameters (Botev et al., 2024). The Griffin architecture uses a 1D temporal convolution of size 4 before the sequence mixing part, similar to H3 (Fu et al., 2023) and Mamba (Gu & Dao, 2024), but the hidden state is vector valued with independent updates per each (scalar) dimension. In contrast, Eagle-7B (Peng et al., 2024) builds on the RWKV architecture and uses a matrix-valued hidden state similar to linear attention and gated linear attention (Katharopoulos et al., 2020; Yang et al., 2024b). + +Among the Mamba models at the 7B parameter scale, Waleffe et al. (2024) provided the first comparative analysis of Mamba 1, Mamba 2, and a hybrid Mamba architecture. In their experiments, the performance of both Mamba 1 and Mamba 2 significantly lagged behind Transformers, while the hybrid architecture was shown to surpass the performance of Transformers. Aligned with this finding, several new hybrid Mamba architectures have been proposed, including Samba (3.8B) (Ren et al., 2024), Zamba (7B) (Glorioso et al., 2024), and the 12B parameter mixture-of-experts-model Jamba (Lieber et al., 2024). More recently, FalconMamba (Zuo et al., 2024) based on Mamba 1 and Codestral Mamba (Mistral AI Team, 2024) based on Mamba 2 have shown that a purely recurrent architecture is capable of exceeding the performance of both hybrid Mamba models and Transformers. + +# 5. Experiments + +# 5.1. Language Modeling Performance + +Huggingface Leaderboard. We start by benchmarking xLSTM 7B against state-of-the-art Transformer and recurrent LLMs on the 7B parameter scale. To this end, we evaluate the performance on the Open LLM Leaderboard v2 using the LM Evaluation Harness (Gao et al., 2024; Fourrier et al., 2024). The results are summarized in Tab. 1, showing that xLSTM 7B ranks in the mid-range among 7B-scale models, several of which benefited from substantially larger training datasets. We believe that with a larger and better curated training dataset, including a greater emphasis on math and code data in earlier training phases, xLSTM 7B could match the performance of the strongest 7B models. + +Long-Context Evaluation and Fine-Tuning. 
To evaluate long-context capabilities, we use the RULER benchmark (Hsieh et al., 2024), which consists of a set of synthetic needle-in-a-haystack, question-answering and variable tracking tasks, with varying context length from 4K to 131K tokens. For this benchmark, we consider both our standard xLSTM 7B and a long-context version (xLSTM 7B LCTX), where we replace the standard cool-down phase described in App. B with a long-context variant. For the long-context cool-down phase, we add long-context data (see App. Tab. 5) to the training corpus and train the model with a context length of 32K, while adjusting the batch size to maintain the number of tokens per batch. We compare to Llama 2 7B (not long-context fine-tuned) and Llama 3.1 8B (long-context fine-tuned up to 131K tokens) as Transformer baselines, CodestralMamba and FalconMamba as State Space Model baselines, and RWKV-5/6 as additional RNN baselines. + +The results on RULER are shown in Fig. 3. As expected, Llama 3 provides the strongest baseline, since it is heavily fine-tuned on very long contexts and with a more advanced and optimized approach (Grattafori et al., 2024). On the other hand, Llama 2 fails entirely for context lengths beyond $4\mathrm{k}$ , for which it has not been trained. For xLSTM 7B, the long-context cool-down stage in pre-training largely improves long-context capabilities, resulting in competitive performance compared to state-space models and outperforming RWKV-5/6. Notably, the long-context xLSTM 7B achieves $20\%$ average accuracy at a context length $131\mathrm{k}$ although it was trained only with a context length up to $32\mathrm{k}$ during the cool-down phase. This is particularly remarkable given that, unlike Transformers with a growing KV cache, xLSTM 7B must store information from the entire sequence in a fixed-size memory with limited capacity (see Tab. 3). We assume that xLSTM 7B's performance could be pushed further by explicitly training on even longer sequences and with a more advanced fine-tuning protocol as it was used in the training of Llama 3 (Grattafori et al., 2024). + +![](images/743017a4695f83f97380f066aaf427357e1a6368af63be4238b73f15730ccbf1.jpg) +Figure 3. RULER results of xLSTM 7B in comparison to Transformers (with and without long context finetuning) and State Space Models, with and without medium context cooldown. + +In Sec. 5.3, we further investigate the effect of the memory state size and the input gate on the long context capabilities of xLSTM 7B. + +# 5.2. Speed Benchmarks + +The constant memory size and linear compute scaling with context length of our xLSTM architecture enable highly efficient generative inference in large scale-inference serving environments as well as local inference running on edge devices. + +We focus on the local single user inference setting, which is common when models are deployed on edge devices. Therefore, we benchmark generative inference with our xLSTM 7B model on a single NVIDIA H100 GPU with batch size 1, unless specified otherwise. We compare our xLSTM 7B to Llama 2 and Llama 3 models as Transformer baselines and Falcon Mamba (Mamba 1 architecture) and Codestral Mamba (Mamba 2 architecture) as Mamba baselines. We use model implementations from Huggingface transformers library and optimize each with torch.compile6 and PyTorch CUDA Graphs (Nguyen et al., 2021). In Appendix D, we additionally compare our optimized Huggingface xLSTM 7B to Llama2, Llama3, Falcon Mamba and Codestral Mamba in the vLLM (Kwon et al., 2023) inference framework. 
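As a rough illustration of this measurement setup, the sketch below times batch-size-1 generation for a Hugging Face causal LM at different prefill lengths. The model identifier is a placeholder, and the snippet omits the torch.compile and CUDA-graph tuning applied to all models in the benchmarks, so it approximates the protocol rather than reproducing the reported numbers.

```python
import time

import torch
from transformers import AutoModelForCausalLM

MODEL_ID = "NX-AI/xLSTM-7b"  # placeholder identifier; any causal LM works for the timing logic

model = AutoModelForCausalLM.from_pretrained(MODEL_ID, torch_dtype=torch.bfloat16).cuda().eval()


def generation_throughput(prefill_len: int, gen_len: int = 100) -> float:
    """Tokens per second for generating `gen_len` tokens after a prompt of `prefill_len` tokens
    (the prefill pass is included in the timed interval)."""
    # Dummy prompt of the requested length; a repeated token id suffices for timing.
    prompt = torch.full((1, max(prefill_len, 1)), 1, dtype=torch.long, device="cuda")
    with torch.inference_mode():
        model.generate(prompt, max_new_tokens=gen_len, min_new_tokens=gen_len)  # warm-up run
        torch.cuda.synchronize()
        start = time.perf_counter()
        model.generate(prompt, max_new_tokens=gen_len, min_new_tokens=gen_len)
        torch.cuda.synchronize()
    return gen_len / (time.perf_counter() - start)


if __name__ == "__main__":
    for prefill in (0, 2048, 8192, 32768):
        print(f"prefill={prefill:6d}: {generation_throughput(prefill):7.1f} tok/s")
```

For recurrent models this number stays roughly flat as the prefill grows, whereas for Transformer baselines the growing KV cache and quadratic attention cost pull it down, which is the behavior plotted in Fig. 4.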
Generation Throughput. The generation throughput measures the generation speed in tokens per second at varying prefill lengths, i.e., varying lengths of the document the model gets to read before it starts to generate text. In Fig. 4, we observe that due to the quadratic scaling with input context length of the attention mechanism, the speed at which

![](images/8a6f3f05a92c89c60fdbec4abf5d22cbe2c3d9cbe3d27d4e576e5c1f54a1180c.jpg)
Figure 4. Throughput for generating 100 tokens with batch size 1 at varying prefill lengths.

the Transformer models can generate text significantly drops for longer prefill lengths. In contrast, recurrent architectures with constant cost per generated token have a constant generation speed independent of the input context length.

We find that xLSTM 7B is about $50\%$ faster in text generation than Mamba, which we attribute mostly to our optimized block design (see Sec. 3), and even faster than Llama-based Transformer models with a similar block design at prefill length 0.

Generation Time and Memory Consumption. We measure the token generation time and GPU memory usage (without prefill) for different generation lengths. Fig. 5 (left) demonstrates the linear scaling of recurrent models vs. the quadratic scaling of Transformers in compute (runtime), while Fig. 5 (right) shows the constant memory size of recurrent models compared to the linear growth of the Transformer KV-cache. Since Llama 3 uses grouped query attention (Ainslie et al., 2023), its memory usage grows more slowly compared to Llama 2, which uses default multi-head attention.

With our optimized block design, we operate the mLSTM in a lower-dimensional space. This results in a significantly lower memory footprint (Fig. 5 (right)) and lower generation times (Fig. 5 (left)) of our xLSTM 7B model compared to the Mamba models.

Time To First Token. In applications where the language model operates as an interface to the user (potentially on edge devices), it is important to have short response times. In Fig. 6, we measure this response time or latency as the time the model takes to generate 1 or 100 tokens after consuming varying prefill lengths. Our xLSTM 7B achieves the fastest response times for all prefill lengths.

![](images/317f8770397ef16b33657176d75944884e229097bc72685880b575555c580671.jpg)
Figure 5. Time and GPU memory used for generation of a single sequence of varying lengths for generation without prefill.

![](images/f4e2d0f414712106bbdf3d0bdc964be6af49cc6048e6674eb2895b75e16bf596.jpg)
Figure 6. Time to first (1) token and time to first 100 tokens at varying prefill lengths for batch size 1.
Figure 7. Prefill throughput varying batch size and context length.

Prefill Throughput. Finally, we measure the prefill throughput in tokens per second for 65,536 tokens at varying batch size and context length. Due to the quadratic scaling with context length, the throughput of the Llama models decreases with longer contexts. In contrast, our xLSTM 7B achieves the highest throughput (about $70\%$ higher than Codestral Mamba) independent of the context length.

# 5.3. Ablation Studies

Finally, we validate our design choices to optimize the training stability and efficiency of our xLSTM 7B architecture.

Pre-Up vs. Post-Up Projection Block. We compare the pre-up projection block architecture against our optimized mLSTM block in terms of validation perplexity and training step time for three model sizes.
For both block architectures, we apply gate soft-capping and the input gate bias initialization described in Sec. 3. The results in Tab. 2 show only a slight performance difference in terms of validation perplexity at the largest model size. However, the $3.5 \times$ speedup in training step time confirms our choice for the

![](images/b0b7ce4872708088c934ca571c0ab3dd4a420c3325924aa5ee1dd1b701f761da.jpg)

post-up projection block in xLSTM 7B, deviating from the pre-up projection of Mamba (Gu & Dao, 2024; Dao & Gu, 2024) and the previous xLSTM architecture (Beck et al., 2024).

Memory State Size. Both the memory state size and the training step time are directly influenced by the number of heads (see Sec. 3.1 and Tab. 3). In this experiment we investigate how the memory state size affects the performance of the xLSTM in terms of validation perplexity, downstream tasks, and long-context tasks. To do so, we train xLSTM models with 7B parameters and different numbers of heads on 160B tokens of our pre-training dataset. In our evaluations in perplexity (Tab. 3) and on downstream tasks (Tab. 7 and 8), we find that the performance remains stable across different numbers of heads, i.e., memory state sizes, with a slight improvement for more heads (e.g. 16). In contrast, our long context evaluation in Fig. 13 suggests that at very long contexts 4 and 8 heads (i.e., larger memory states) seem to perform better. While this is in line with our intuition that a larger memory state size corresponds to better long-context capabilities, we believe that an even larger study (e.g., training on more tokens) than our ablation at 7B parameters and 160B tokens would be necessary to fully explore this connection.

Norm Layer Types. Our update of the xLSTM block architecture has two normalization layers, a pre-norm at the block entry and a head-wise norm layer after the mLSTM cell. In this ablation, we test the effect of the types of these normalization layers on training stability and performance, with LayerNorm (Ba et al., 2016) and RMSNorm (Zhang & Sennrich, 2019) as the options. In Fig. 9 in App. C.2 we confirm that, for the pre-norm, the RMSNorm type has a strong stabilizing effect, whereas for the mLSTM cell state norm the norm type has no impact on stability and performance.

Table 2. Comparison between the previous xLSTM architecture (Beck et al., 2024) and our xLSTM 7B architecture in terms of step time and perplexity for different numbers of parameters. Models of size 160M and 400M use batch size 128 distributed over 16 GPUs, and 1.4B parameter models use batch size 256 (32 GPUs). For the 7B parameter model, our new architecture uses batch size 512 (128 GPUs), whereas the previous architecture uses only batch size 256 (128 GPUs) because of the architecture's increased GPU memory requirements. Due to the expensive computational costs, we only compute the token throughput and did not fully train the 7B parameter models for this ablation.

$\uparrow/\downarrow$ indicates larger / smaller values are better.
| MODEL | THROUGHPUT ↑ 1K TOKENS/SEC | SPEEDUP ↑ | PPL ↓ | Δ PPL |
|---|---|---|---|---|
| 160M PREVIOUS | 76.20 | | 20.43 | |
| 160M OURS | 225.99 | ×2.97 | 21.34 | +0.91 |
| 400M PREVIOUS | 28.13 | | 15.26 | |
| 400M OURS | 102.40 | ×3.64 | 15.74 | +0.48 |
| 1.4B PREVIOUS | 10.57 | | 12.46 | |
| 1.4B OURS | 37.03 | ×3.50 | 12.68 | +0.22 |
| 7B PREVIOUS | 3.46 | | - | |
| 7B OURS | 9.15 | ×2.64 | - | |
+ +Table 3. Head dimension ablation for a 7B parameter xLSTM model with 32 blocks, embedding dimension 4096 and training context length 8192. KV Cache in Tokens shows how many tokens in a similar sized Transformer correspond to our state size. FLOPs forward are the mLSTM cell forward FLOPs for a full sequence. $\downarrow$ indicates smaller values are better. + +
| #Heads | $d_{hv}$ | Total Memory State in MB | KV Cache in Tokens | FLOPs forward ↓ | Val PPL ↓ | Train Step Time in s ↓ |
|---|---|---|---|---|---|---|
| 4 | 1024 | 268.4 | 256 | 7.6e11 | 9.58 | 3.97 |
| 8 | 512 | 134.2 | 128 | 4.1e10 | 9.52 | 3.63 |
| 16 | 256 | 67.1 | 64 | 2.4e10 | 9.52 | 3.51 |
| 32 | 128 | 33.6 | 32 | 1.5e10 | 9.55 | 3.41 |
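The memory-state column of Tab. 3 follows directly from the sizing rule in Sec. 3.1. The short script below re-derives it, assuming float32 states and, for the KV-cache equivalent, a Transformer of the same depth and embedding dimension whose keys and values are also stored in float32 (the table does not state the cache precision, so this is an assumption).

```python
# Re-deriving the "Total Memory State" and "KV Cache in Tokens" columns of Tab. 3.
NUM_BLOCKS = 32   # xLSTM 7B depth
D_MODEL = 4096    # embedding dimension
BYTES = 4         # float32

for num_heads in (4, 8, 16, 32):
    d_hv = D_MODEL // num_heads      # head (value) dimension
    d_qk = d_hv // 2                 # d_qk = d_hv / 2 (Sec. 3.1)
    state_bytes = NUM_BLOCKS * num_heads * d_qk * d_hv * BYTES
    # A same-depth Transformer caches 2 * d_model values (key + value) per token and layer.
    kv_bytes_per_token = NUM_BLOCKS * 2 * D_MODEL * BYTES
    print(num_heads, round(state_bytes / 1e6, 1), "MB ≈", state_bytes // kv_bytes_per_token, "tokens")
```

This reproduces the table exactly (e.g. 134.2 MB and 128 tokens for 8 heads) and makes explicit that halving the head dimension halves the recurrent memory.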
+ +Soft-capping. Soft-capping (Eq. (13)) of the output logits and the input and forget gate pre-activations, is important for training stability. In Fig. 10 of the appendix, we visualize the validation loss and gradient norms during training on 160B tokens with and without soft-capping. The run without soft-capping shows a higher variance in the gradient norms and an overall worse validation loss. + +Input Gate. We initialize the input gate with larger negative values (e.g. -10) to mitigate large gradient norm spikes and variance (see Sec. 3.2). This suggests that the input gate is important for the performance of the xLSTM architecture. Therefore, in App. C.2 we test the effect of having the input gate non-trainable. We compare a version with fixed input gate at one (i.e. setting weights and biases to zero) with a version, where the input gate bias is fixed at our low default initialization value of -10. We find that, while the learnable input gate only slightly improves performance of our xLSTM over the fixed input gate versions on our standard downstream tasks (App. C.2, Tab. 7 and 8), it significantly improves performance on long-context evaluations (App. C.2, Fig. 13). + +# 6. Conclusion + +In this work, we demonstrate how our targeted modifications enable the xLSTM architecture to scale to models with 7B parameters, trained on $2.3\mathrm{T}$ tokens. By switching to a post-up-projection structure, gate soft-capping and proper initialization, we largely improve training stability and token throughput, making the xLSTM the fastest RNN-based architecture at the 7B scale, while competitive in performance with Transformers and other recurrent models. We believe that xLSTM's very high decoding speeds in combination with its good performance highlight its potential as foundational architecture for methods investing substantial compute at inference time. + +# Impact Statement + +This paper presents a novel architecture for fast and efficient language modeling, reducing computational costs and energy consumption without sacrificing performance. By making high-quality language models more accessible, our approach helps bridge the digital divide, enabling equitable AI deployment in low-resource settings. Additionally, the efficiency gains contribute to environmental sustainability by lowering the carbon footprint of large-scale NLP systems. However, there might be both positive and negative societal impacts. We are aware of the risks, but believe that our and the overall advancements in the field of machine learning technology provide a net benefit to society and the world. + +# Acknowledgements + +The ELLIS Unit Linz, the LIT AI Lab, and the Institute for Machine Learning are supported by the Federal State Upper Austria. The NXAI Lab is supported by NXAI GmbH, Linz, Austria. + +# References + +Ainslie, J., Lee-Thorp, J., de Jong, M., Zemlyanskiy, Y., Lebrón, F., and Sanghai, S. GQA: Training generalized multi-query transformer models from multi-head checkpoints, 2023. URL https://arxiv.org/abs/2305.13245. +Alkin, B., Beck, M., Poppel, K., Hochreiter, S., and Brandstetter, J. Vision-LSTM: xLSTM as generic vision backbone. In Proceedings of the International Conference on Learning Representations (ICLR), 2025. URL https://openreview.net/forum?id=SiH7DwNKZZ. +Allal, L. B., Lozhkov, A., Bakouch, E., Blázquez, G. M., Tunstall, L., Piqueres, A., Marafioti, A., Zakka, C., von Werra, L., and Wolf, T. SmolLM2 - with great data, comes great performance, 2024. 
+Azerbayev, Z., Schoelkopf, H., Paster, K., Santos, M. D., McAleer, S., Jiang, A. Q., Deng, J., Biderman, S., and Welleck, S. LLemma: An open language model for mathematics, 2023. +Ba, L. J., Kiros, J. R., and Hinton, G. E. Layer normalization. CoRR, abs/1607.06450, 2016. +Bai, Y., Lv, X., Zhang, J., He, Y., Qi, J., Hou, L., Tang, J., Dong, Y., and Li, J. LongAlign: A recipe for long context alignment of large language models. In Findings of the Association for Computational Linguistics: EMNLP 2024, pp. 1376-1395, Miami, Florida, USA, November 2024. Association for Computational Linguistics. doi: 10.18653/v1/2024.findings-emnlp.74. URL https://aclanthology.org/2024.findings-emnlp.74. +Beck, M., Poppel, K., Spanring, M., Auer, A., Prudnikova, O., Kopp, M., Klambauer, G., Brandstetter, J., and Hochreiter, S. xLSTM: Extended long short-term memory. In Proceedings of the Conference on Neural Information Processing Systems (NeurIPS), 2024. URL https://arxiv.org/abs/2405.04517. +Beck, M., Poppel, K., Lippe, P., and Hochreiter, S. Tiled Flash Linear Attention: More Efficient Linear RNN and xLSTM Kernels. arXiv, 2503.14376, 2025. URL https://arxiv.org/abs/2503.14376. +Beeching, E., Fourrier, C., Habib, N., Han, S., Lambert, N., Rajani, N., Sanseviero, O., Tunstall, L., and Wolf, T. Open LLM Leaderboard. https://huggingface.co/spaces/open-llm-leaderboard-old/open_11m_leaderboard, 2023. +Ben Allal, L., Lozhkov, A., Penedo, G., Wolf, T., and von Werra, L. Cosmopedia, February 2024. URL https://huggingface.co/datasets/HuggingFaceTB/cosmopedia. + +Black, S., Biderman, S., Hallahan, E., Anthony, Q., Gao, L., Golding, L., He, H., Leahy, C., McDonell, K., Phang, J., Pieler, M., Prashanth, U. S., Purohit, S., Reynolds, L., Tow, J., Wang, B., and Weinbach, S. GPT-NeoX20B: An open-source autoregressive language model. In ACL Workshop on Challenges & Perspectives in Creating Large Language Models, 2022. URL https://arxiv.org/abs/2204.06745. +Botev, A., De, S., Smith, S. L., Fernando, A., Muraru, G.-C., Haroun, R., Berrada, L., Pascanu, R., Sessa, P. G., Dadashi, R., and et al. RecurrentGemma: Moving past transformers for efficient open language models, 2024. URL https://arxiv.org/abs/2404.07839. +Chen, Y., Qian, S., Tang, H., Lai, X., Liu, Z., Han, S., and Jia, J. Longlora: Efficient fine-tuning of long-context large language models. In Proceedings of the International Conference on Learning Representations (ICLR), 2024. +Cobbe, K., Kosaraju, V., Bavarian, M., Chen, M., Jun, H., Kaiser, L., Plappert, M., Tworek, J., Hilton, J., Nakano, R., Hesse, C., and Schulman, J. Training verifiers to solve math word problems. arXiv preprint arXiv:2110.14168, 2021. +Dao, T. FlashAttention-2: Faster attention with better parallelism and work partitioning. In Proceedings of the International Conference on Learning Representations (ICLR), 2024. URL https://openreview.net/forum?id=mZn2Xyh9Ec. +Dao, T. and Gu, A. Transformers are SSMs: Generalized models and efficient algorithms through structured state space duality. In Proceedings of the International Conference on Machine Learning (ICML), 2024. URL https://openreview.net/forum?id=ztn8FCR1td. +De, S., Smith, S. L., Fernando, A., Botev, A., Cristian-Muraru, G., Gu, A., Haroun, R., Berrada, L., Chen, Y., Srinivasan, S., Desjardins, G., Doucet, A., Budden, D., Teh, Y. W., Pascanu, R., Freitas, N. D., and Gulcehre, C. Griffin: Mixing gated linear recurrences with local attention for efficient language models, 2024. 
+DeepSeek-AI, Guo, D., Yang, D., Zhang, H., Song, J., Zhang, R., Xu, R., Zhu, Q., Ma, S., Wang, P., Bi, X., and et al. DeepSeek-R1: Incentivizing reasoning capability in LLMs via reinforcement learning, January 2025. URL http://arxiv.org/abs/2501.12948.arXiv:2501.12948 [cs]. +Fourrier, C., Habib, N., Wolf, T., and Tunstall, L. Lighteval: A lightweight framework for llm evaluation, 2023. URL https://github.com/huggingface/lighteval. + +Fourrier, C., Habib, N., Lozovskaya, A., Szafer, K., and Wolf, T. Open llm leaderboard v2. https://huggingface.co/spaces/open-llm-leaderboard/open_11m_leaderboard, 2024. +Fu, D. Y., Dao, T., Saab, K. K., Thomas, A. W., Rudra, A., and Re, C. Hungry hungry hippos: Towards language modeling with state space models. In Proceedings of the International Conference on Learning Representations (ICLR), 2023. URL https://openreview.net/forum?id=COZDy0WYGg. +Gao, L., Tow, J., Abbasi, B., Biderman, S., Black, S., DiPofi, A., Foster, C., Golding, L., Hsu, J., Le Noac'h, A., Li, H., McDonell, K., Muennighoff, N., Ociepa, C., Phang, J., Reynolds, L., Schoelkopf, H., Skowron, A., Sutawika, L., Tang, E., Thite, A., Wang, B., Wang, K., and Zou, A. A framework for few-shot language model evaluation, 07 2024. URL https://zenodo.org/records/12608602. +Gemma Team. Gemma: Open models based on gemini research and technology. 2024a. URL https://arxiv.org/abs/2403.08295. +Gemma Team. Gemma 2: Improving open language models at a practical size, 2024b. URL https://arxiv.org/abs/2408.00118. +Glorioso, P., Anthony, Q., Tokpanov, Y., Whittington, J., Pilault, J., Ibrahim, A., and Millidge, B. Zamba: A compact 7b ssm hybrid model. arXiv preprint arXiv:2405.16712, 2024. +Grattaftori, A., Dubey, A., Jauhri, A., Pandey, A., Kadian, A., Al-Dahle, A., Letman, A., Mathur, A., Schelten, A., Vaughan, A., and et al. The Llama 3 herd of models. 2024. URL https://arxiv.org/abs/2407.21783. +Gu, A. and Dao, T. Mamba: Linear-time sequence modeling with selective state spaces. In First Conference on Language Modeling, 2024. URL https://openreview.net/forum?id=tEYskw1VY2. +Guan, X., Zhang, L. L., Liu, Y., Shang, N., Sun, Y., Zhu, Y., Yang, F., and Yang, M. rStar-Math: Small LLMs can master math reasoning with self-evolved deep thinking, 2025. +Hao, S., Gu, Y., Ma, H., Hong, J., Wang, Z., Wang, D., and Hu, Z. Reasoning with language model is planning with world model. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pp. 8154-8173, 2023. + +He, B., Noci, L., Paliotta, D., Schlag, I., and Hofmann, T. Understanding and minimising outlier features in transformer training. In Proceedings of the Conference on Neural Information Processing Systems (NeurIPS). +Hoffmann, J., Borgeaud, S., Mensch, A., Buchatskaya, E., Cai, T., Rutherford, E., de las Casas, D., Hendricks, L. A., Welbl, J., Clark, A., Hennigan, T., Noland, E., Millican, K., van den Driessche, G., Damoc, B., Guy, A., Osindero, S., Simonyan, K., Elsen, E., Vinyals, O., Rae, J. W., and Sifre, L. An empirical analysis of compute-optimal large language model training. In Oh, A. H., Agarwal, A., Belgrave, D., and Cho, K. (eds.), Advances in Neural Information Processing Systems, 2022. URL https://openreview.net/forum?id=iBBcRU1OAPR. +Hsieh, C.-P., Sun, S., Kriman, S., Acharya, S., Rekesh, D., Jia, F., Zhang, Y., and Ginsburg, B. Ruler: What's the real context size of your long-context language models? arXiv preprint arXiv:2404.06654, 2024. +Hua, W., Dai, Z., Liu, H., and Le, Q. Transformer quality in linear time. 
In Chaudhuri, K., Jegelka, S., Song, L., Szepesvari, C., Niu, G., and Sabato, S. (eds.), Proceedings of the International Conference on Machine Learning (ICML), volume 162, pp. 9099-9117. PMLR, 17-23 Jul 2022. URL https://proceedings.mlrpress/v162/hua22a.html. +Katharopoulos, A., Vyas, A., Pappas, N., and Fleuret, F. Transformers are RNNs: fast autoregressive transformers with linear attention. In Proceedings of the International Conference on Machine Learning (ICML), 2020. +Kocetkov, D., Li, R., allal, L. B., LI, J., Mou, C., Jernite, Y., Mitchell, M., Ferrandis, C. M., Hughes, S., Wolf, T., Bahdanau, D., Werra, L. V., and de Vries, H. The Stack: 3 TB of permissively licensed source code. Transactions on Machine Learning Research, 2023. ISSN 2835-8856. URL https://openreview.net/forum?id=pxpbTdUEpD. +Kraus, M., Divo, F., Dhami, D. S., and Kersting, K. xLSTM-Mixer: Multivariate time series forecasting by mixing via scalar memories. arXiv preprint, 2024. URL https://arxiv.org/abs/2410.16928. +Kwon, W., Li, Z., Zhuang, S., Sheng, Y., Zheng, L., Yu, C. H., Gonzalez, J., Zhang, H., and Stoica, I. Efficient memory management for large language model serving with pagedattention. In Proceedings of the 29th Symposium on Operating Systems Principles, pp. 611-626, 2023. +Lambert, N., Morrison, J., Pyatkin, V., Huang, S., Ivison, H., Brahman, F., Miranda, L. J. V., Liu, A., Dziri, N., Lyu, S., and et al. Tülu 3: Pushing frontiers in open language model post-training. 2024. + +LI, J., Beeching, E., Tunstall, L., Lipkin, B., Soletskyi, R., Huang, S. C., Rasul, K., Yu, L., Jiang, A., Shen, Z., Qin, Z., Dong, B., Zhou, L., Fleureau, Y., Lample, G., and Polu, S. NuminaMath. https://github.com/project-numina/aimo-progress-prize/blob/main/report/numina_dataset.pdf, 2024. +Li, J., Fang, A., Smyrnis, G., Ivgi, M., Jordan, M., Gadre, S., Bansal, H., Guha, E., Keh, S., Arora, K., and et al. Datacomp-lm: In search of the next generation of training sets for language models. arXiv preprint arXiv:2406.11794, 2024. +Lieber, O., Lenz, B., Bata, H., Cohen, G., Osin, J., Dalmedigos, I., Safahi, E., Meirom, S., Belinkov, Y., Shalev-Shwartz, S., Abend, O., Alon, R., Asida, T., Bergman, A., Glozman, R., Gokhman, M., Manevich, A., Ratner, N., Rozen, N., Shwartz, E., Zusman, M., and Shoham, Y. Jamba: A hybrid transformer-mamba language model, 2024. URL https://arxiv.org/abs/2403.19887. +Loshchilov, I. and Hutter, F. Decoupled weight decay regularization. In Proceedings of the International Conference on Learning Representations (ICLR), 2019. URL https://openreview.net/forum?id=Bkg6RiCqY7. +Lozhkov, A., Ben Allal, L., von Werra, L., and Wolf, T. FineWeb-Edu: the finest collection of educational content, 2024. URL https://huggingface.co/datasets/HuggingFaceFW/fineweb-edu. +Mistral AI Team. Codestral Mamba. https://mistral.ai/news/codestral-mamba/, 2024. Accessed: 2025-01-30. +Nguyen, V., Carilli, M., Eryilmaz, S. B., Singh, V., Lin, M., Gimelshein, N., Desmaison, A., and Yang, E. Accelerating PyTorch with CUDA graphs, October 2021. URL https://pytorch.org/blog/accelerating-pytorch-with-cuda-graphs/. Accessed: 2025-01-30. +OLMo, T., Walsh, P., Soldaini, L., Groeneveld, D., Lo, K., Arora, S., Bhagia, A., Gu, Y., Huang, S., Jordan, M., Lambert, N., Schwenk, D., Tafjord, O., Anderson, T., Atkinson, D., Brahman, F., Clark, C., Dasigi, P., Dziri, N., Guerquin, M., Ivison, H., Koh, P. W., Liu, J., Malik, S., Merrill, W., Miranda, L. J. 
V., Morrison, J., Murray, T., Nam, C., Pyatkin, V., Rangapur, A., Schmitz, M., Skjonsberg, S., Wadden, D., Wilhelm, C., Wilson, M., Zettlemoyer, L., Farhadi, A., Smith, N. A., and Hajishirzi, H. 2 OLMo 2 furious, 2025. URL https://arxiv.org/abs/2501.00656. + +Pan, B. Anti-Haystack, February 2024. URL https://huggingface.co/datasets/wenbopan/anti-haystack. +Peng, B., Alcaide, E., Anthony, Q., Albalak, A., Arcadinho, S., Biderman, S., Cao, H., Cheng, X., Chung, M., Derczynski, L., and et al. RWKV: Reinventing RNNs for the transformer era. In Bouamor, H., Pino, J., and Bali, K. (eds.), Findings of the Association for Computational Linguistics: EMNLP 2023, pp. 14048-14077, Singapore, December 2023. Association for Computational Linguistics. doi: 10.18653/v1/2023.findings-emnlp.936. URL https://aclanthology.org/2023.findings-emnlp.936. +Peng, B., Goldstein, D., Anthony, Q. G., Albalak, A., Alcaide, E., Biderman, S., Cheah, E., Ferdinan, T., GV, K. K., Hou, H., Krishna, S., Jr., R. M., Muennighoff, N., Obeid, F., Saito, A., Song, G., Tu, H., Zhang, R., Zhao, B., Zhao, Q., Zhu, J., and Zhu, R.-J. Eagle and finch: RWKV with matrix-valued states and dynamic recurrence. In First Conference on Language Modeling, 2024. URL https://openreview.net/forum?id=soz1SEiPeq. +Ren, L., Liu, Y., Lu, Y., Shen, Y., Liang, C., and Chen, W. Samba: Simple hybrid state space models for efficient unlimited context language modeling, 2024. URL https://arxiv.org/abs/2406.07522. +Schmidinger, N., Schneckenreiter, L., Seidl, P., Schimunek, J., Hoedt, P.-J., Brandstetter, J., Mayr, A., Luukkonen, S., Hochreiter, S., and Klambauer, G. Bio-xLSTM: Generative modeling, representation and in-context learning of biological and chemical sequences. In Proceedings of the International Conference on Learning Representations (ICLR), 2025. URL https://openreview.net/forum?id=IjbXZdugdj. +Schmied, T., Adler, T., Patil, V., Beck, M., Poppel, K., Brandstetter, J., Klambauer, G., Pascanu, R., and Hochreiter, S. A large recurrent action model: xLSTM enables fast inference for robotics tasks, 2024. URL https://arxiv.org/abs/2410.22391. +Shah, J., Bikshandi, G., Zhang, Y., Thakkar, V., Ramani, P., and Dao, T. FlashAttention-3: Fast and accurate attention with asynchrony and low-precision, 2024. URL https://arxiv.org/abs/2407.08608. +Shazeer, N. Glu variants improve transformer, 2020. URL https://arxiv.org/abs/2002.05202. +Shoeybi, M., Patwary, M., Puri, R., LeGresley, P., Casper, J., and Catanzaro, B. Megatron-LM: Training multibillion parameter language models using model parallelism, 2020. URL https://arxiv.org/abs/1909.08053. + +Sun, Y., Dong, L., Huang, S., Ma, S., Xia, Y., Xue, J., Wang, J., and Wei, F. Retentive network: A successor to transformer for large language models. ArXiv, abs/2307.08621, 2023. URL https://api_semanticscholar.org/CorpusID:259937453. +Teknium. Openhermes 2.5: An open dataset of synthetic data for generalist llm assistants, 2023. URL https://huggingface.co/datasets/ tekrium/OpenHermes-2.5. +TogetherCompute. LongDataCollections, October 2023. URL https://huggingface.co/datasets/togethercomputer/Long-Data-Collections. +Touvron, H., Martin, L., Stone, K., Albert, P., Almahairi, A., Babaei, Y., Bashlykov, N., Batra, S., Bhargava, P., Bhosale, S., and et al. Llama 2: Open foundation and fine-tuned chat models. CoRR, abs/2307.09288, 2023. doi: 10.48550/ARXIV.2307.09288. URL https://doi.org/10.48550/arXiv.2307.09288. +Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. 
N., Kaiser, L., and Polosukhin, I. Attention is all you need. In Advances in Neural Information Processing Systems (NeurIPS), volume 30, pp. 5998-6008. Curran Associates, Inc., 2017. +Waleffe, R., Byeon, W., Riach, D., Norick, B., Korthikanti, V. A., Dao, T., Gu, A., Hatamizadeh, A., Singh, S., Narayanan, D., Kulshreshtha, G., Singh, V., Casper, J., Kautz, J., Shoeybi, M., and Catanzaro, B. An empirical study of Mamba-based language models. ArXiv, abs/2406.07887, 2024. URL https://apisemantic scholar.org/CorpusID:270391285. +Xiong, R., Yang, Y., He, D., Zheng, K., Zheng, S., Xing, C., Zhang, H., Lan, Y., Wang, L., and Liu, T. On layer normalization in the Transformer architecture. In III, H. D. and Singh, A. (eds.), Proceedings of the International Conference on Machine Learning (ICML), volume 119 of Proceedings of Machine Learning Research, pp. 10524-10533. PMLR, 13-18 Jul 2020. URL https://proceedings.mlr.press/v119/xiong20b.html. +Yang, A., Yang, B., Hui, B., Zheng, B., Yu, B., Zhou, C., Li, C., Li, C., Liu, D., Huang, F., and et al. Qwen2 technical report. 2024a. URL https://arxiv.org/abs/2407.10671. +Yang, S. and Zhang, Y. FLA: A triton-based library for hardware-efficient implementations of linear attention mechanism, January 2024. URL https://github.com/fea-org/flash-linear-attention. + +Yang, S., Wang, B., Shen, Y., Panda, R., and Kim, Y. Gated linear attention transformers with hardware-efficient training. In Proceedings of the International Conference on Machine Learning (ICML), 2024b. URL https://openreview.net/forum?id=ia5XvxFUJT. +Yao, S., Yu, D., Zhao, J., Shafran, I., Griffiths, T., Cao, Y., and Narasimhan, K. Tree of thoughts: Deliberate problem solving with large language models. In Oh, A., Naumann, T., Globerson, A., Saenko, K., Hardt, M., and Levine, S. (eds.), Advances in Neural Information Processing Systems, volume 36, pp. 11809-11822. Curran Associates, Inc., 2023. URL https://openreview.net/forum?id=5Xc1ecxO1h. +Yu, L., Jiang, W., Shi, H., Yu, J., Liu, Z., Zhang, Y., Kwok, J. T., Li, Z., Weller, A., and Liu, W. Metamath: Bootstrap your own mathematical questions for large language models. arXiv preprint arXiv:2309.12284, 2023. +Zhang, B. and Sennrich, R. Root Mean Square Layer Normalization. In Advances in Neural Information Processing Systems 32, Vancouver, Canada, 2019. URL https://openreview.net/references/pdf?id=S1qBAf6rr. +Zuo, J., Velikanov, M., Rhaiem, D. E., Chahed, I., Belkada, Y., Kunsch, G., and Hacid, H. Falcon Mamba: The first competitive attention-free 7b language model. 2024. URL https://arxiv.org/abs/2410.05355. + +# A. xLSTM 7B Architecture Summary + +The xLSTM 7B architecture consists of 32 post-up projection blocks and is described in Fig. 1 and Tab. 4. We use the GPT-NeoX-20B tokenizer (Black et al., 2022) with vocabulary size 50257 and do not tie the weights for input layers (embedding) and output layers (logits). + +Table 4. Hyperparameters of xLSTM 7B. + +
| NUM PARAMS | VOCAB SIZE | NUM BLOCKS | MODEL DIM | NUM HEADS |
|---|---|---|---|---|
| 6,865,424,896 | 50257 | 32 | 4096 | 8 |
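The hyperparameters above, together with the factors quoted in Fig. 8 and Sec. 3.1, can be collected into a small configuration object. The sketch below is illustrative only; the derived quantities are filled in from the paper's description rather than from a released configuration file.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class XLSTM7BConfig:
    vocab_size: int = 50257        # GPT-NeoX-20B tokenizer
    num_blocks: int = 32           # post up-projection blocks
    d_model: int = 4096            # embedding dimension
    num_heads: int = 8             # mLSTM heads per layer
    mlp_proj_factor: float = 2.66  # SwiGLU expansion (Fig. 8)
    qk_dim_factor: float = 0.5     # d_qk = 0.5 * d_hv (Sec. 3.1)
    context_length: int = 8192     # pre-training context length
    tie_embeddings: bool = False   # input and output embeddings are not tied

    @property
    def d_hv(self) -> int:
        return self.d_model // self.num_heads

    @property
    def d_qk(self) -> int:
        return int(self.d_hv * self.qk_dim_factor)


cfg = XLSTM7BConfig()
assert (cfg.d_hv, cfg.d_qk) == (512, 256)
```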
+ +![](images/7f87653b03eaea8e7f83de5caa2757184bd86333b03937517ec023b3ca0b8fb5.jpg) +Figure 8. Improved xLSTM Block. The lower part is a output-gated sequence-mix layer with the mLSTM at its core, whereas the upper part is a Gated MLP (SwiGLU) as a feature/channel-mix layer. Multiple Heads are shown in depth, larger light gray boxes without are linear layers. For the SwiGLU we use a projection factor of 2.66 matching common Transformers. For the query/ key dimension we use a factor of 0.5. The Norm layers are RMS norms (Zhang & Sennrich, 2019), the Headwise Norm is a Layernorm (Ba et al., 2016). + +# B. Training Recipe + +Optimization. Pre-training was conducted on a high-performance computing cluster comprising 128 NVIDIA H100 GPUs. We use Fully Sharded Data Parallel (FSDP) and activation checkpointing to reduce the parameter and activation memory footprint. We pre-train xLSTM 7B for a total of 550K (thousand) training steps with batch size 512 and context length 8192, encompassing a total of 2.3T (trillion) training tokens. We apply batch size ramp-up with batch size 128 for the first 2000 steps, 256 for the next 2000 steps, and the full batch size (512) afterward. We use the AdamW optimizer (Loshchilov & Hutter, 2019) with (peak) $\alpha = 5 \times 10^{-4}$ , $\beta_{1} = 0.99$ , $\beta_{2} = 0.95$ , $\epsilon = 10^{-8}$ , weight decay 0.1 and gradient clipping norm 0.5. The learning rate schedule comprises a linear warm-up over 3000 training steps, an exponential decay phase that spans 540,000 steps, and a linear cool-down lasting 7000 steps. The exponential decay factor is chosen so that $0.1 \times \alpha$ is reached after 500,000 steps. + +Sequence packing. Language datasets come with documents of highly varying lengths. To efficiently train a model by processing fixed sequence length sequences (e.g. 8192 tokens), multiple shorter documents are typically packed into a sequence, and the different documents are separated by an end-of-document (EOD) token. In order to avoid leaking information between independent documents that are packed into the same sequence, we reset the memory states of each mLSTM cell at the document borders signified by the EOD token. This can be easily achieved by explicitly setting the forget gate value to zero, resetting the memory state to the zero matrix. + +Dataset selection. We only use publicly available high-quality datasets for pre-training. The dataset selection is divided into two training stages: In the first stage lasting 500K (thousand) training steps, we train exclusively on the DCLM dataset (Li et al., 2024). In the second stage (50K steps) towards the end of the training, we use a combination of datasets that prioritizes math, coding, and question-and-answer (Q&A) data. The dataset proportions for the second stage are listed in the second column of Tab. 5. + +Similarly to Zuo et al. (2024), the second training stage includes a collection of small supervised fine-tuning (SFT) Q&A datasets to improve the model's understanding of texts involving questions and answers. These SFT datasets are all publicly available and consist of NuminaMath CoT (LI et al., 2024), MetaMathQA (Yu et al., 2023), Tulu v3.1 (Lambert et al., 2024), OpenHermes 2.5 (Teknium, 2023), GSM8K (Cobbe et al., 2021), and Smoltalk (subsets magpie-ultra, longalign, and self-oss-instruct) (Allal et al., 2024). + +For longer context training we replace the high-quality data cool-down by a longer context version keeping the number of tokens per step and the number of steps fixed. 
The batch size is reduced from 512 to 128, while increasing the context length to 32768. We replace a large share of the DCLM dataset part with long context text collections, namely LongDataCollections (TogetherCompute, 2023), LongAlign10k (Bai et al., 2024), AntiHayStack (Pan, 2024) and LongAlpaca12k (Chen et al., 2024), see third column of Tab. 5. + +Table 5. Dataset Proportions for second training stage in standard and longer context mode. + +
| DATASET NAME | PROPORTION STANDARD | PROPORTION LONGCTX |
|---|---|---|
| DCLM (Li et al., 2024) | 40% | 20% |
| FineWeb-Edu (Lozhkov et al., 2024) | 15% | 15% |
| Cosmopedia (Ben Allal et al., 2024) | 10% | 10% |
| ProofPile-2 (Azerbayev et al., 2023) | 15% | 15% |
| TheStack (Kocetkov et al., 2023) | 15% | 15% |
| SFT datasets (see Sec. B) | 5% | 5% |
| LongDataCollections (TogetherCompute, 2023) | - | 15% |
| LongAlign10k (Bai et al., 2024) | - | 1% |
| AntiHayStack (Pan, 2024) | - | 1% |
| LongAlpaca12k (Chen et al., 2024) | - | 2% |
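The learning-rate schedule described in the Optimization paragraph above (linear warm-up, exponential decay, linear cool-down) can be written out as follows. This is a sketch under one stated assumption: the text says $0.1 \times \alpha$ is reached "after 500,000 steps", which is interpreted here as 500,000 decay steps, and the cool-down is taken to end at zero as in the ablation runs.

```python
import math

PEAK_LR = 5e-4
WARMUP_STEPS = 3_000      # linear warm-up
DECAY_STEPS = 540_000     # exponential decay phase
COOLDOWN_STEPS = 7_000    # linear cool-down
TOTAL_STEPS = WARMUP_STEPS + DECAY_STEPS + COOLDOWN_STEPS  # 550,000

# Per-step decay rate such that 0.1 * PEAK_LR is reached after 500,000 decay steps (assumption).
DECAY_RATE = math.log(0.1) / 500_000


def learning_rate(step: int) -> float:
    """Piecewise schedule: linear warm-up, exponential decay, linear cool-down to zero."""
    if step < WARMUP_STEPS:
        return PEAK_LR * step / WARMUP_STEPS
    if step < WARMUP_STEPS + DECAY_STEPS:
        return PEAK_LR * math.exp(DECAY_RATE * (step - WARMUP_STEPS))
    lr_cooldown_start = PEAK_LR * math.exp(DECAY_RATE * DECAY_STEPS)
    return lr_cooldown_start * max(TOTAL_STEPS - step, 0) / COOLDOWN_STEPS


# Spot checks: peak at the end of warm-up, 0.1 * peak after 500k decay steps, zero at the end.
print(learning_rate(3_000), learning_rate(503_000), learning_rate(550_000))
```

The shorter ablation runs described below use a cosine decay instead, which Fig. 12 shows to work slightly better for a fixed step budget, while the exponential schedule makes it trivial to continue pre-training.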
+ +Ablation Training For hyperparameter tuning and ablation trainings ("-abl") at the 7B scale, we use a shorter training cycle with 76,000 training steps at context length 8192 and batch size 256, resulting in 160B tokens. We use a linear warmup of 3000 steps, cosine decay to $10\%$ of the peak learning rate at 75,000 steps and a linear cooldown of 1,000 steps to learning rate 0 at the end. Here, we only train on a subset of the DCLM dataset, without high-quality data in the late phase of pre-training. Peak learning rate and other training hyperparameters are the same as for the main training. + +# C. Experiments + +# C.1. Extended Evaluation + +To enable comparability to older models, we evaluate our models on the task selection from the first version of the HuggingFace leaderboard using HuggingFace's lighteval (Beeching et al., 2023; Fourrier et al., 2023). The results in Tab. 6 show that there is a trend upwards in metrics from older (e.g. Llama 2) to newer models (e.g. Llama 3.1), but that the differences and ordering between models vary across the tasks. + +Table 6. Model Performance on Huggingface Leaderboard v1 based on lighteval by HuggingFace. ↑ indicates larger values are better. + +
| MODEL | ARC-C ↑ | MMLU ↑ | HELLASWAG ↑ | WINOGRANDE ↑ | TRUTHFULQA ↑ | OPENBOOKQA ↑ | PIQA ↑ | AVERAGE ↑ |
|---|---|---|---|---|---|---|---|---|
| TRANSFORMERS | | | | | | | | |
| Llama-3.1-8B | 0.562 | 0.663 | 0.720 | 0.745 | 0.362 | 0.447 | 0.818 | 0.617 |
| Llama-2-7B-hf | 0.511 | 0.468 | 0.687 | 0.706 | 0.318 | 0.412 | 0.786 | 0.555 |
| OLMo-7B-hf | 0.443 | 0.286 | 0.673 | 0.661 | 0.301 | 0.383 | 0.801 | 0.507 |
| Qwen2.5-7B | 0.617 | 0.753 | 0.700 | 0.717 | 0.478 | 0.458 | 0.804 | 0.647 |
| Gemma-7B | 0.593 | 0.640 | 0.721 | 0.740 | 0.381 | 0.436 | 0.813 | 0.618 |
| HYBRID MODELS | | | | | | | | |
| Zamba2-7B | 0.672 | 0.683 | 0.740 | 0.801 | 0.479 | 0.468 | 0.802 | 0.664 |
| RECURRENT MODELS | | | | | | | | |
| Falcon-Mamba-7B | 0.599 | 0.622 | 0.709 | 0.743 | 0.459 | 0.460 | 0.822 | 0.631 |
| Falcon-Mamba-7B (pre-decay) | 0.520 | 0.573 | 0.699 | 0.719 | 0.312 | 0.430 | 0.801 | 0.579 |
| Mamba-Codestral-7B (v0.1) | 0.486 | 0.501 | 0.626 | 0.618 | 0.358 | 0.380 | 0.771 | 0.534 |
| RWKV-v5-Eagle-7B | 0.449 | 0.313 | 0.622 | 0.663 | 0.330 | 0.393 | 0.772 | 0.506 |
| RWKV-v6-Finch-7B | 0.471 | 0.442 | 0.656 | 0.696 | 0.347 | 0.399 | 0.792 | 0.543 |
| xLSTM 7B | 0.574 | 0.578 | 0.714 | 0.738 | 0.419 | 0.448 | 0.819 | 0.613 |
| xLSTM 7B LCTX | 0.516 | 0.588 | 0.715 | 0.740 | 0.374 | 0.429 | 0.819 | 0.597 |
# C.2. Ablation Experiments

Effect of the Pre-norm Layer Choice (Fig. 9). Here we assess the effect of different normalization layer choices for the pre-norm in (12) and the state-norm in (6), both for the xLSTM with a pre-up projection block of Beck et al. (2024) and our new post-up projection architecture used for xLSTM 7B. We use soft-capping and the negative input gate bias initialization (see Sec. 3.2 and 5.3) for both architectures. For this experiment, we train models with 1.4B parameters for 31,000 steps using context length 8192 and batch size 256. Fig. 9 shows the validation loss and gradient norm for the different architectures and normalization layer choices over the course of training (only the first 15,000 steps are shown). As can be seen, using LayerNorm as the pre-norm layer leads to very large gradient norms and a diverging validation loss after a few training steps, whereas models with RMSNorm train stably. For the state-norm layer, the norm type has no impact on the training dynamics.

![](images/c766f9deb15c91f96c136fea0677e8685ab1e9dbf0e30054b1b4c2124c5b9a3a.jpg)
Figure 9. Comparison of pre-up projection and post-up projection blocks with different combinations of RMSNorm and LayerNorm. At each step, the plot shows the maximum gradient norm observed within the previous 50 steps.

![](images/aabade3195a6cf6b518bbc365b7caacb8895e8e688edf649848cafe0afcc636b.jpg)

Effect of Soft-Capping (Fig. 10). The two runs in Fig. 10 show the effect of soft-capping for two 7B sized xLSTM models trained for 76,000 steps at batch size 256 and context length 8192, for an effective 160B tokens.

![](images/1ca23f8840b1e2d385c041730d173b7bce0423fb530c0617c2caf411e95726ed.jpg)
Figure 10. Effect of soft-capping. Two 7B sized xLSTM models are trained with and without soft-capping for 160B tokens. The lower gradient norm noise (right) of the model trained with soft-capping is a clear indicator of its better validation loss (left). At each step, the plot shows the maximum gradient norm observed within the previous 50 steps.

![](images/1b989dc079a067b37351e0758f75838fcd416782354579d684725f9c0c7509f5.jpg)

Effect of Negative Input Gate Bias Init (Fig. 11). In this experiment we train 160M parameter models with batch size 128 and context length 4096 and vary the input gate bias initialization over [0, -2, -5, -10]. The weights of the input gates are initialized to 0.

In Figure 11 we observe that initializing the input gate biases at -10 effectively mitigates gradient norm spikes and reduces gradient norm variance during training. In our experiments up to 7B parameters we observed that this behavior transfers across model scales.

We therefore initialize the input gate biases to -10. For an extensive discussion of this behavior we refer to concurrent work by (?).

![](images/87b87ea34dbc90103582acc88d17e03193420bc25e15987d203b58cd539af9d7.jpg)
Figure 11. Effect of the Bias Initialization. We conduct experiments with four different input gate bias initializations at the 160M parameter scale, with the validation loss depicted on the left and the gradient norm on the right, along the training steps. The higher input gate bias initializations show large gradient norm spikes, which result in worse training outcomes. Only the lowest initialization maintains smooth and low gradient norms while reaching the best validation perplexities. The reason for this behavior is studied in more detail in (?). At each step, the plot shows the maximum gradient norm observed within the previous 50 steps.
+ +![](images/c4e6675332c5a8a2c630d6e95775860918b826c1607cfc7eae86620755cc7d3e.jpg) + +Effect of the Learning Rate Scheduler (Fig. 12). In our largest experiments, we choose a linear warmup followed by an exponential decay as a learning rate schedule in order to enable a continued pre-training with more tokens and without an additional warmup. However, smaller-scale experiments in Fig. 12 show the benefit of a cosine schedule over an exponential one. + +![](images/939cdef91bbc5971e4a6485a44fd5a5e3333e3d2b4d719575f42498c9dce4fa3.jpg) +Figure 12. Effect of Learning Rate Scheduler. The tested learning rate schedules are shown on the left, with the corresponding training perplexities on the right. While the exponential learning rate schedule can be continued trivially, the cosine schedule actually works slightly better given a fixed number of iterations. The learning rate cooldown to zero at the end gives a similar and significant benefit in both cases. + +![](images/c69a46683f2d60ec9b1d8ee09131bc20d18b8854dfb78cf497f30060e10fdc63.jpg) + +Effect of Memory State Size and Input Gate on Long Context Evaluations (Fig. 13, Tab. 7 and 8). In order to test the influence of the head numbers (cell dimensions) and input gate on long context abilities, we test the ablation models trained in Sec. 5.3 for their performance in the RULER benchmark (Hsieh et al., 2024). The results in Fig. 13 show that, while the effect of the head number and equivalently the recurrent memory is inconclusive, the models strongly benefit from the learnable, exponential input gate for the long context performance. + +![](images/b8c10d44ad7726ffba9c95626143c4c2bb8baae0626e60d1c77a63722926ba2d.jpg) +Figure 13. RULER average accuracies for different number of heads/cell dimensions, and fixed input gate. The ablations are trained on 160B tokens at 8k context. + +Additionally, we evaluate our ablation versions trained for 160B tokens and evaluated on the current and old HuggingFace LLM Leaderboard as in Tab. 1 and 6, respectively. Results in Tab. 7, 8 show only slight influence of the head dimensions or fixing input gate. Only fixing the input gate to the very small value of its standard bias initialization has a stronger impact on the Leaderboard v1. + +Table 7. Model Performance for different number of heads and non-trainable input gate on the Huggingface Leaderboard v2 tasks. $\uparrow$ indicates larger values are better. + +
| MODEL | BBH ↑ | MMLU-PRO ↑ | MATH ↑ | MUSR ↑ | GPQA ↑ | IFEVAL ↑ | AVERAGE ↑ |
|---|---|---|---|---|---|---|---|
| xLSTM 7B abl NH4 | 0.306 | 0.114 | 0.004 | 0.363 | 0.253 | 0.160 | 0.200 |
| xLSTM 7B abl NH8 | 0.304 | 0.115 | 0.002 | 0.363 | 0.248 | 0.173 | 0.201 |
| xLSTM 7B abl NH16 | 0.317 | 0.119 | 0.002 | 0.390 | 0.258 | 0.161 | 0.208 |
| xLSTM 7B abl NH32 | 0.327 | 0.120 | 0.001 | 0.379 | 0.256 | 0.171 | 0.209 |
| xLSTM 7B abl NH8 IGateFixed 0 | 0.303 | 0.117 | 0.004 | 0.381 | 0.229 | 0.149 | 0.197 |
| xLSTM 7B abl NH8 IGateFixed -10 | 0.308 | 0.109 | 0.000 | 0.357 | 0.253 | 0.165 | 0.199 |
| xLSTM 7B | 0.381 | 0.242 | 0.036 | 0.379 | 0.280 | 0.244 | 0.260 |
| xLSTM 7B LCTX | 0.390 | 0.252 | 0.040 | 0.374 | 0.253 | 0.234 | 0.257 |
+ +Table 8. Model Performance for different number of heads and non-trainable input gate on the Huggingface Leaderboard v1 tasks. $\uparrow$ indicates larger values are better. + +
| MODEL | ARC-C ↑ | MMLU ↑ | HELLASWAG ↑ | WINOGRANDE ↑ | TRUTHFULQA ↑ | OPENBOOKQA ↑ | PIQA ↑ | AVERAGE ↑ |
|---|---|---|---|---|---|---|---|---|
| xLSTM 7B abl NH4 | 0.492 | 0.296 | 0.665 | 0.672 | 0.282 | 0.405 | 0.798 | 0.516 |
| xLSTM 7B abl NH8 | 0.487 | 0.292 | 0.669 | 0.680 | 0.302 | 0.426 | 0.791 | 0.521 |
| xLSTM 7B abl NH16 | 0.505 | 0.351 | 0.668 | 0.701 | 0.294 | 0.409 | 0.796 | 0.532 |
| xLSTM 7B abl NH32 | 0.500 | 0.378 | 0.666 | 0.676 | 0.325 | 0.411 | 0.799 | 0.536 |
| xLSTM 7B abl NH8 IGateFixed 0 | 0.464 | 0.292 | 0.658 | 0.672 | 0.280 | 0.415 | 0.788 | 0.510 |
| xLSTM 7B abl NH8 IGateFixed -10 | 0.241 | 0.250 | 0.340 | 0.519 | 0.286 | 0.226 | 0.681 | 0.363 |
| xLSTM 7B | 0.574 | 0.578 | 0.714 | 0.738 | 0.419 | 0.448 | 0.819 | 0.613 |
| xLSTM 7B LCTX | 0.516 | 0.588 | 0.715 | 0.740 | 0.374 | 0.429 | 0.819 | 0.597 |
# D. Speed Comparison to Optimized Inference Frameworks

While we tune the baseline models' HuggingFace implementations for speed in a similar way as for our xLSTM 7B architecture, there are other frameworks, e.g. vLLM (Kwon et al., 2023), which are optimized further for inference speed. In Figures 18 and 19, we compare our HuggingFace generation speeds to vLLM generation speeds. In Figures 14, 15, 16 and 17, we show the respective comparison for prefill time, prefill+generation time, and prefill throughput. We compare xLSTM in the HuggingFace implementation to Llama-3.1-8B, Llama-2-7B-hf, Falcon-Mamba-7B and Mamba-Codestral-7B in both optimized HuggingFace and vLLM. For all vLLM speeds, we use PyTorch 2.6.0 to enable Codestral-Mamba-7b, whereas for the HuggingFace speed experiments, we use PyTorch 2.5.1. Although there are speed improvements in vLLM, especially for the Transformer-based Llama models, xLSTM 7B continues to be the fastest model in both frameworks.

![](images/3660b74d62efbf0deae8400671982a5c8411f217df4e198e0f557e511b34dac0.jpg)
Figure 14. Prefill time (time to first and to first 100 tokens) for HuggingFace implementations. Repetition of Figure 6 for comparison to vLLM on the right.

![](images/25407c2ac33e8653d60e18817f15ae236083b16b63f8570a5321efbafe9dac1a.jpg)
Figure 15. Prefill time (time to first and to first 100 tokens) for vLLM implementations, except xLSTM 7B (HuggingFace implementation).

![](images/62a62009fc68e296be5573b8b49ec5c81d0edeccc66c3ba168eab1b28584401e.jpg)
Figure 16. Prefill throughput for HuggingFace implementations. Repetition of Figure 7 to compare to vLLM on the right.

![](images/3a6012e1fe03c5aa2302a201bae29dc39a2f94b05e74d14b795530f5cfd206e5.jpg)
Figure 17. Prefill throughput for vLLM implementations, except for xLSTM 7B (HuggingFace implementation).

![](images/13fb3dea6257d3f67a37428b2afa0020f0448b5327d2789483a14cbe7270da7a.jpg)
Figure 18. Generation time comparison of HuggingFace to vLLM. While all models are close in generation time for short generation lengths, within the 16k generated tokens the vLLM backend is faster for the Transformers than for the Mamba models. xLSTM in the HuggingFace backend continues to be the fastest model from short to long sequences.

![](images/10cb2a3d8bd930cb1f57213ab36e052c074ad7c88d04b6d80a6400c1d5d10804.jpg)
Figure 19. Generation throughput comparison of HuggingFace to vLLM. Transformers (Llama models) show a strong speed-up in vLLM, but xLSTM still is the fastest model throughout the prefill context lengths. There is an odd transition for the Mamba models from 4k to 8k prefill context.

# E. FLOP Counting

We count the number of FLOPs in a forward pass of the mLSTM. We use a factor of 2 to describe the multiply-accumulate cost.

We use factors denoted as F_X to describe the number of FLOPs for operation X (e.g. F_exp for the exponential function). By default we set all of these factors to 1.

# E.1.
# E.1. FLOPs for the mLSTM Operation

- Inter-chunk recurrent:
  - Chunkwise gates: num_heads $\times$ num_chunks $\times$ (0.5 $\times$ chunk_size $\times$ (chunk_size + 1) + 2 $\times$ chunk_size)
  - Gates & max state: num_heads $\times$ num_chunks $\times$ (3 + F_max + F_exp + chunk_size $\times$ (3 + 2 $\times$ F_exp))
  - Numerator: num_heads $\times$ num_chunks $\times$ (2 $\times$ d_qk $\times$ d_v + 4 $\times$ chunk_size $\times$ d_qk $\times$ d_v + 3 $\times$ chunk_size $\times$ d_qk)
  - Denominator: num_heads $\times$ num_chunks $\times$ (d_qk + 4 $\times$ chunk_size $\times$ d_qk)
- Intra-chunk parallel:
  - Gate matrix: num_heads $\times$ num_chunks $\times$ (0.5 $\times$ chunk_size $\times$ (chunk_size + 1) + chunk_size $\times$ chunk_size $\times$ (3 + F_mask + F_max + F_exp) + chunk_size $\times$ (1 + F_max))
  - Gated Attn logits: num_heads $\times$ num_chunks $\times$ 2 $\times$ chunk_size $\times$ chunk_size $\times$ (1 + d_qk)
  - Numerator: num_heads $\times$ num_chunks $\times$ 2 $\times$ chunk_size $\times$ chunk_size $\times$ d_v
  - Denominator: num_heads $\times$ num_chunks $\times$ 2 $\times$ chunk_size $\times$ chunk_size
  - Output combination: num_heads $\times$ num_chunks $\times$ (chunk_size $\times$ (1 + F_max) + chunk_size $\times$ (2 + F_abs + F_exp + F_max + 2 $\times$ d_v))

# E.2. FLOPs for the mLSTM in a Transformer Backbone

For computing the number of FLOPs we follow the procedure from Hoffmann et al. (2022). We include the FLOPs contributed by the embedding matrices. We do not include RMS- or Layer-Norm and skip connection FLOPs. We assume that the backward pass has 2 times the number of FLOPs of the forward pass. For the forward pass, the number of FLOPs of the mLSTM for a single sequence can be approximated by:

- Embeddings
  - $2 \times$ seq_len $\times$ vocab_size $\times$ d_model
- mLSTM (single layer)
  - Query, key, value, input and forget gate projections: $2 \times$ seq_len $\times$ d_model $\times$ num_heads $\times$ (2 $\times$ d_qk + d_v + 2)
  - Output gate and projection: $4 \times$ seq_len $\times$ d_model $\times$ num_heads $\times$ d_v + seq_len $\times$ num_heads $\times$ d_v $\times$ F_sig
  - mLSTM cell: see above.
- Gated Feedforward (single layer)
  - $6 \times$ seq_len $\times$ d_model $\times$ d_model $\times$ proj_factor_ff + $2 \times$ seq_len $\times$ d_model $\times$ F_swish
- Final Logits
  - $2 \times$ seq_len $\times$ d_model $\times$ vocab_size
- Total forward pass FLOPs:
  - embeddings + num_layers $\times$ (mLSTM + feedforward) + final_logits
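The itemized terms above translate directly into code. The following sketch is our own transcription, not the authors' released code: the function names `mlstm_cell_flops` and `mlstm_forward_flops` are hypothetical, it assumes seq_len is an integer multiple of chunk_size, and it keeps all F_X factors at their default of 1.

```python
# Sketch of the FLOP counts in Sections E.1 and E.2 (hypothetical helper names).
# Factor constants repeated here so the block runs on its own.
F_exp = F_max = F_abs = F_mask = F_sig = F_swish = 1

def mlstm_cell_flops(seq_len, num_heads, d_qk, d_v, chunk_size):
    """Forward FLOPs of the chunkwise mLSTM cell (Section E.1)."""
    nc = seq_len // chunk_size  # assumes seq_len is a multiple of chunk_size
    cs = chunk_size
    inter = num_heads * nc * (
        (0.5 * cs * (cs + 1) + 2 * cs)                              # chunkwise gates
        + (3 + F_max + F_exp + cs * (3 + 2 * F_exp))                # gates & max state
        + (2 * d_qk * d_v + 4 * cs * d_qk * d_v + 3 * cs * d_qk)    # numerator
        + (d_qk + 4 * cs * d_qk)                                    # denominator
    )
    intra = num_heads * nc * (
        (0.5 * cs * (cs + 1)                                        # gate matrix
         + cs * cs * (3 + F_mask + F_max + F_exp)
         + cs * (1 + F_max))
        + 2 * cs * cs * (1 + d_qk)                                  # gated attention logits
        + 2 * cs * cs * d_v                                         # numerator
        + 2 * cs * cs                                               # denominator
        + (cs * (1 + F_max) + cs * (2 + F_abs + F_exp + F_max + 2 * d_v))  # output combination
    )
    return inter + intra

def mlstm_forward_flops(seq_len, vocab_size, d_model, num_layers,
                        num_heads, d_qk, d_v, chunk_size, proj_factor_ff):
    """Forward-pass FLOPs of the full mLSTM model (Section E.2)."""
    embeddings = 2 * seq_len * vocab_size * d_model
    mlstm_layer = (
        2 * seq_len * d_model * num_heads * (2 * d_qk + d_v + 2)    # q, k, v, input & forget gates
        + 4 * seq_len * d_model * num_heads * d_v                   # output gate and projection
        + seq_len * num_heads * d_v * F_sig
        + mlstm_cell_flops(seq_len, num_heads, d_qk, d_v, chunk_size)
    )
    ffn_layer = (6 * seq_len * d_model * d_model * proj_factor_ff
                 + 2 * seq_len * d_model * F_swish)
    final_logits = 2 * seq_len * d_model * vocab_size
    return embeddings + num_layers * (mlstm_layer + ffn_layer) + final_logits
```

Under the stated assumption that the backward pass costs twice the forward pass, the training FLOPs per sequence would be three times the value returned by `mlstm_forward_flops`.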
# E.3. FLOPs for the Transformer with Self-Attention

We use the FLOP computations from Hoffmann et al. (2022), with the difference that we use gated feedforward blocks.

- Embeddings
  - $2 \times$ seq_len $\times$ vocab_size $\times$ d_model
- Attention (single layer)
  - Key, query and value projections: $2 \times$ seq_len $\times$ d_model $\times$ num_heads $\times$ (2 $\times$ d_qk + d_v)
  - Key @ query logits: $2 \times$ seq_len $\times$ seq_len $\times$ (d_qk $\times$ num_heads)
  - Softmax: $3 \times$ seq_len $\times$ seq_len $\times$ num_heads
  - Softmax @ query reductions: $2 \times$ seq_len $\times$ seq_len $\times$ (num_heads $\times$ d_qk)
  - Final linear: $2 \times$ seq_len $\times$ d_model $\times$ (num_heads $\times$ d_v)
- Gated Feedforward (single layer)
  - $6 \times$ seq_len $\times$ d_model $\times$ d_model $\times$ proj_factor_ff + $2 \times$ seq_len $\times$ d_model $\times$ F_swish
- Final Logits
  - $2 \times$ seq_len $\times$ d_model $\times$ vocab_size
- Total forward pass FLOPs:
  - embeddings + num_layers $\times$ (attention + feedforward) + final_logits

# F. Parameter Counting

In this section we count the number of parameters in the mLSTM and compare it to the number of parameters in a Transformer with self-attention. We assume that the model does not use weight tying and that biases are omitted.

# F.1. Parameter Counting for the mLSTM

- Embeddings
  - vocab_size $\times$ d_model
- mLSTM (single layer)
  - qkv: d_model $\times$ num_heads $\times$ (2 $\times$ d_qk + d_v)
  - Input and forget gate: $2 \times$ d_model $\times$ num_heads + $2 \times$ num_heads
  - Output gate: d_model $\times$ d_model
  - Output projection: d_model $\times$ d_model
  - Norm: d_model
- Gated Feedforward (single layer)
  - $3 \times$ d_model $\times$ d_model $\times$ proj_factor_ff
- Norm (single layer)
  - d_model
- Final Logits:
  - d_model $\times$ vocab_size
- Total number of parameters:
  - embeddings + num_layers $\times$ (mLSTM + feedforward + 2 $\times$ norm) + norm + final_logits

# F.2. Parameter Counting for the Transformer with Self-Attention

- Embeddings
  - vocab_size $\times$ d_model
- Attention (single layer)
  - qkv: d_model $\times$ num_heads $\times$ (2 $\times$ d_qk + d_v)
  - Output projection: d_model $\times$ d_model
- Gated Feedforward (single layer)
  - $3 \times$ d_model $\times$ d_model $\times$ proj_factor_ff
- Norm (single layer)
  - d_model
- Final Logits:
  - d_model $\times$ vocab_size
- Total number of parameters:
  - embeddings + num_layers $\times$ (attention + feedforward + 2 $\times$ norm) + norm + final_logits
\ No newline at end of file diff --git a/ICML/2025/xLSTM 7B_ A Recurrent LLM for Fast and Efficient Inference/images.zip b/ICML/2025/xLSTM 7B_ A Recurrent LLM for Fast and Efficient Inference/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..7a00aaf98bfc31bb82444090788c0b5dc1928222 --- /dev/null +++ b/ICML/2025/xLSTM 7B_ A Recurrent LLM for Fast and Efficient Inference/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b649df0ba1196be619b54840c52e4f0f3fdc50c88042311ac827b8ccbdbe02a0 +size 1272149 diff --git a/ICML/2025/xLSTM 7B_ A Recurrent LLM for Fast and Efficient Inference/layout.json b/ICML/2025/xLSTM 7B_ A Recurrent LLM for Fast and Efficient Inference/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..e735e246b88d80d59b74b02d2277fbad6ac54161 --- /dev/null +++ b/ICML/2025/xLSTM 7B_ A Recurrent LLM for Fast and Efficient Inference/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d0d662ac1a589900e10313ffa945d16773833b68fa87e751735a5e7c49fc6c22 +size 777845